00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2388 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3649 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.074 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.074 The recommended git tool is: git 00:00:00.075 using credential 00000000-0000-0000-0000-000000000002 00:00:00.076 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.155 Fetching changes from the remote Git repository 00:00:00.158 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.234 Using shallow fetch with depth 1 00:00:00.234 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.235 > git --version # timeout=10 00:00:00.298 > git --version # 'git version 2.39.2' 00:00:00.298 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.334 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.334 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.379 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.391 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.402 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.402 > git config core.sparsecheckout # timeout=10 00:00:07.414 > git read-tree -mu HEAD # timeout=10 00:00:07.429 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.451 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.451 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.535 [Pipeline] Start of Pipeline 00:00:07.549 [Pipeline] library 00:00:07.550 Loading library shm_lib@master 00:00:07.550 Library shm_lib@master is cached. Copying from home. 00:00:07.567 [Pipeline] node 00:00:07.576 Running on CYP13 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:07.578 [Pipeline] { 00:00:07.587 [Pipeline] catchError 00:00:07.589 [Pipeline] { 00:00:07.601 [Pipeline] wrap 00:00:07.610 [Pipeline] { 00:00:07.617 [Pipeline] stage 00:00:07.619 [Pipeline] { (Prologue) 00:00:07.813 [Pipeline] sh 00:00:08.101 + logger -p user.info -t JENKINS-CI 00:00:08.119 [Pipeline] echo 00:00:08.120 Node: CYP13 00:00:08.129 [Pipeline] sh 00:00:08.444 [Pipeline] setCustomBuildProperty 00:00:08.455 [Pipeline] echo 00:00:08.456 Cleanup processes 00:00:08.461 [Pipeline] sh 00:00:08.752 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.752 205072 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.768 [Pipeline] sh 00:00:09.060 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:09.060 ++ grep -v 'sudo pgrep' 00:00:09.060 ++ awk '{print $1}' 00:00:09.060 + sudo kill -9 00:00:09.060 + true 00:00:09.079 [Pipeline] cleanWs 00:00:09.091 [WS-CLEANUP] Deleting project workspace... 00:00:09.091 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.099 [WS-CLEANUP] done 00:00:09.103 [Pipeline] setCustomBuildProperty 00:00:09.118 [Pipeline] sh 00:00:09.407 + sudo git config --global --replace-all safe.directory '*' 00:00:09.521 [Pipeline] httpRequest 00:00:09.853 [Pipeline] echo 00:00:09.854 Sorcerer 10.211.164.20 is alive 00:00:09.864 [Pipeline] retry 00:00:09.865 [Pipeline] { 00:00:09.883 [Pipeline] httpRequest 00:00:09.887 HttpMethod: GET 00:00:09.888 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.889 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.894 Response Code: HTTP/1.1 200 OK 00:00:09.894 Success: Status code 200 is in the accepted range: 200,404 00:00:09.895 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.769 [Pipeline] } 00:00:10.789 [Pipeline] // retry 00:00:10.798 [Pipeline] sh 00:00:11.089 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.107 [Pipeline] httpRequest 00:00:11.528 [Pipeline] echo 00:00:11.530 Sorcerer 10.211.164.20 is alive 00:00:11.541 [Pipeline] retry 00:00:11.543 [Pipeline] { 00:00:11.557 [Pipeline] httpRequest 00:00:11.562 HttpMethod: GET 00:00:11.563 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:11.564 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:11.575 Response Code: HTTP/1.1 200 OK 00:00:11.575 Success: Status code 200 is in the accepted range: 200,404 00:00:11.575 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:52.009 [Pipeline] } 00:00:52.026 [Pipeline] // retry 00:00:52.034 [Pipeline] sh 00:00:52.327 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:55.651 [Pipeline] sh 00:00:55.940 + git -C spdk log --oneline -n5 00:00:55.940 c13c99a5e test: Various fixes for Fedora40 00:00:55.940 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:55.940 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:55.940 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:55.940 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:55.952 [Pipeline] } 00:00:55.968 [Pipeline] // stage 00:00:55.977 [Pipeline] stage 00:00:55.979 [Pipeline] { (Prepare) 00:00:55.996 [Pipeline] writeFile 00:00:56.011 [Pipeline] sh 00:00:56.301 + logger -p user.info -t JENKINS-CI 00:00:56.315 [Pipeline] sh 00:00:56.607 + logger -p user.info -t JENKINS-CI 00:00:56.621 [Pipeline] sh 00:00:56.914 + cat autorun-spdk.conf 00:00:56.914 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.914 SPDK_TEST_NVMF=1 00:00:56.914 SPDK_TEST_NVME_CLI=1 00:00:56.914 SPDK_TEST_NVMF_NICS=mlx5 00:00:56.914 SPDK_RUN_UBSAN=1 00:00:56.914 NET_TYPE=phy 00:00:56.922 RUN_NIGHTLY=1 00:00:56.927 [Pipeline] readFile 00:00:56.955 [Pipeline] withEnv 00:00:56.958 [Pipeline] { 00:00:56.971 [Pipeline] sh 00:00:57.265 + set -ex 00:00:57.265 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:00:57.265 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:00:57.265 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.265 ++ SPDK_TEST_NVMF=1 00:00:57.265 ++ SPDK_TEST_NVME_CLI=1 00:00:57.265 ++ SPDK_TEST_NVMF_NICS=mlx5 00:00:57.265 ++ SPDK_RUN_UBSAN=1 00:00:57.265 ++ NET_TYPE=phy 00:00:57.265 ++ RUN_NIGHTLY=1 00:00:57.265 + case 
$SPDK_TEST_NVMF_NICS in 00:00:57.265 + DRIVERS=mlx5_ib 00:00:57.265 + [[ -n mlx5_ib ]] 00:00:57.265 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:57.265 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:07.348 rmmod: ERROR: Module irdma is not currently loaded 00:01:07.348 rmmod: ERROR: Module i40iw is not currently loaded 00:01:07.348 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:07.348 + true 00:01:07.348 + for D in $DRIVERS 00:01:07.348 + sudo modprobe mlx5_ib 00:01:07.348 + exit 0 00:01:07.359 [Pipeline] } 00:01:07.373 [Pipeline] // withEnv 00:01:07.377 [Pipeline] } 00:01:07.391 [Pipeline] // stage 00:01:07.401 [Pipeline] catchError 00:01:07.402 [Pipeline] { 00:01:07.414 [Pipeline] timeout 00:01:07.414 Timeout set to expire in 1 hr 0 min 00:01:07.416 [Pipeline] { 00:01:07.429 [Pipeline] stage 00:01:07.432 [Pipeline] { (Tests) 00:01:07.448 [Pipeline] sh 00:01:07.740 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:07.740 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:07.740 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:07.740 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:07.740 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:07.740 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:07.741 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:07.741 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:07.741 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:07.741 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:07.741 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:07.741 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:07.741 + source /etc/os-release 00:01:07.741 ++ NAME='Fedora Linux' 00:01:07.741 ++ VERSION='39 (Cloud Edition)' 00:01:07.741 ++ ID=fedora 00:01:07.741 ++ VERSION_ID=39 00:01:07.741 ++ VERSION_CODENAME= 00:01:07.741 ++ PLATFORM_ID=platform:f39 00:01:07.741 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:07.741 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:07.741 ++ LOGO=fedora-logo-icon 00:01:07.741 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:07.741 ++ HOME_URL=https://fedoraproject.org/ 00:01:07.741 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:07.741 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:07.741 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:07.741 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:07.741 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:07.741 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:07.741 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:07.741 ++ SUPPORT_END=2024-11-12 00:01:07.741 ++ VARIANT='Cloud Edition' 00:01:07.741 ++ VARIANT_ID=cloud 00:01:07.741 + uname -a 00:01:07.741 Linux spdk-cyp-13 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:07.741 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:11.044 Hugepages 00:01:11.044 node hugesize free / total 00:01:11.044 node0 1048576kB 0 / 0 00:01:11.044 node0 2048kB 0 / 0 00:01:11.044 node1 1048576kB 0 / 0 00:01:11.044 node1 2048kB 0 / 0 00:01:11.044 00:01:11.044 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:11.044 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:11.044 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:11.044 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:11.044 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:11.044 I/OAT 
0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:11.044 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:11.044 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:11.044 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:11.044 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:11.044 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:11.044 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:11.044 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:11.044 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:11.044 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:11.044 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:11.044 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:11.044 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:11.044 + rm -f /tmp/spdk-ld-path 00:01:11.044 + source autorun-spdk.conf 00:01:11.044 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.044 ++ SPDK_TEST_NVMF=1 00:01:11.044 ++ SPDK_TEST_NVME_CLI=1 00:01:11.044 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:11.044 ++ SPDK_RUN_UBSAN=1 00:01:11.044 ++ NET_TYPE=phy 00:01:11.044 ++ RUN_NIGHTLY=1 00:01:11.044 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:11.044 + [[ -n '' ]] 00:01:11.044 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:11.044 + for M in /var/spdk/build-*-manifest.txt 00:01:11.044 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:11.044 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:11.044 + for M in /var/spdk/build-*-manifest.txt 00:01:11.044 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:11.044 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:11.044 + for M in /var/spdk/build-*-manifest.txt 00:01:11.044 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:11.044 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:11.044 ++ uname 00:01:11.044 + [[ Linux == \L\i\n\u\x ]] 00:01:11.044 + sudo dmesg -T 00:01:11.044 + sudo dmesg --clear 00:01:11.044 + dmesg_pid=206066 00:01:11.044 + [[ Fedora Linux == FreeBSD ]] 00:01:11.044 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:11.044 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:11.044 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:11.044 + [[ -x /usr/src/fio-static/fio ]] 00:01:11.044 + export FIO_BIN=/usr/src/fio-static/fio 00:01:11.044 + FIO_BIN=/usr/src/fio-static/fio 00:01:11.044 + sudo dmesg -Tw 00:01:11.044 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:11.044 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:11.044 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:11.044 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:11.044 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:11.044 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:11.044 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:11.044 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:11.044 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:11.044 Test configuration: 00:01:11.044 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.044 SPDK_TEST_NVMF=1 00:01:11.044 SPDK_TEST_NVME_CLI=1 00:01:11.044 SPDK_TEST_NVMF_NICS=mlx5 00:01:11.044 SPDK_RUN_UBSAN=1 00:01:11.044 NET_TYPE=phy 00:01:11.044 RUN_NIGHTLY=1 12:28:44 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:11.044 12:28:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:11.044 12:28:44 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:11.044 12:28:44 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:11.045 12:28:44 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:11.045 12:28:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.045 12:28:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.045 12:28:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.045 12:28:44 -- paths/export.sh@5 -- $ export PATH 00:01:11.045 12:28:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.045 12:28:44 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:11.045 12:28:44 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:11.045 12:28:44 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732102124.XXXXXX 00:01:11.045 12:28:44 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732102124.O9DAmc 00:01:11.045 12:28:44 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:11.045 12:28:44 -- common/autobuild_common.sh@446 
-- $ '[' -n '' ']' 00:01:11.045 12:28:44 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:11.045 12:28:44 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:11.045 12:28:44 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:11.045 12:28:44 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:11.045 12:28:44 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:11.045 12:28:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:11.045 12:28:44 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:11.045 12:28:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:11.045 12:28:44 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:11.045 12:28:44 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:11.045 12:28:44 -- spdk/autobuild.sh@16 -- $ date -u 00:01:11.045 Wed Nov 20 11:28:44 AM UTC 2024 00:01:11.045 12:28:44 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:11.045 LTS-67-gc13c99a5e 00:01:11.045 12:28:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:11.045 12:28:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:11.045 12:28:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:11.045 12:28:44 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:11.045 12:28:44 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:11.045 12:28:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:11.045 ************************************ 00:01:11.045 START TEST ubsan 00:01:11.045 ************************************ 00:01:11.045 12:28:44 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:11.045 using ubsan 00:01:11.045 00:01:11.045 real 0m0.000s 00:01:11.045 user 0m0.000s 00:01:11.045 sys 0m0.000s 00:01:11.045 12:28:44 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:11.045 12:28:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:11.045 ************************************ 00:01:11.045 END TEST ubsan 00:01:11.045 ************************************ 00:01:11.306 12:28:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:11.306 12:28:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:11.306 12:28:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:11.306 12:28:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:11.306 12:28:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:11.306 12:28:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:11.307 12:28:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:11.307 12:28:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:11.307 12:28:44 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:11.307 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:11.307 Using default DPDK in 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:12.249 Using 'verbs' RDMA provider 00:01:28.130 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:40.377 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:40.377 Creating mk/config.mk...done. 00:01:40.377 Creating mk/cc.flags.mk...done. 00:01:40.377 Type 'make' to build. 00:01:40.377 12:29:12 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:40.377 12:29:12 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:40.377 12:29:12 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:40.377 12:29:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.377 ************************************ 00:01:40.377 START TEST make 00:01:40.377 ************************************ 00:01:40.377 12:29:12 -- common/autotest_common.sh@1114 -- $ make -j144 00:01:40.377 make[1]: Nothing to be done for 'all'. 00:01:48.516 The Meson build system 00:01:48.516 Version: 1.5.0 00:01:48.516 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:01:48.516 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:01:48.516 Build type: native build 00:01:48.516 Program cat found: YES (/usr/bin/cat) 00:01:48.516 Project name: DPDK 00:01:48.516 Project version: 23.11.0 00:01:48.516 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:48.516 C linker for the host machine: cc ld.bfd 2.40-14 00:01:48.516 Host machine cpu family: x86_64 00:01:48.516 Host machine cpu: x86_64 00:01:48.516 Message: ## Building in Developer Mode ## 00:01:48.516 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:48.516 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:48.516 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:48.516 Program python3 found: YES (/usr/bin/python3) 00:01:48.516 Program cat found: YES (/usr/bin/cat) 00:01:48.516 Compiler for C supports arguments -march=native: YES 00:01:48.516 Checking for size of "void *" : 8 00:01:48.516 Checking for size of "void *" : 8 (cached) 00:01:48.516 Library m found: YES 00:01:48.516 Library numa found: YES 00:01:48.516 Has header "numaif.h" : YES 00:01:48.516 Library fdt found: NO 00:01:48.516 Library execinfo found: NO 00:01:48.516 Has header "execinfo.h" : YES 00:01:48.516 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:48.516 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:48.516 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:48.516 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:48.516 Run-time dependency openssl found: YES 3.1.1 00:01:48.516 Run-time dependency libpcap found: YES 1.10.4 00:01:48.516 Has header "pcap.h" with dependency libpcap: YES 00:01:48.516 Compiler for C supports arguments -Wcast-qual: YES 00:01:48.516 Compiler for C supports arguments -Wdeprecated: YES 00:01:48.516 Compiler for C supports arguments -Wformat: YES 00:01:48.516 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:48.516 Compiler for C supports arguments -Wformat-security: NO 00:01:48.516 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:48.516 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:48.516 Compiler for C supports arguments 
-Wnested-externs: YES 00:01:48.516 Compiler for C supports arguments -Wold-style-definition: YES 00:01:48.516 Compiler for C supports arguments -Wpointer-arith: YES 00:01:48.516 Compiler for C supports arguments -Wsign-compare: YES 00:01:48.516 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:48.516 Compiler for C supports arguments -Wundef: YES 00:01:48.516 Compiler for C supports arguments -Wwrite-strings: YES 00:01:48.516 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:48.516 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:48.516 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:48.516 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:48.516 Program objdump found: YES (/usr/bin/objdump) 00:01:48.516 Compiler for C supports arguments -mavx512f: YES 00:01:48.516 Checking if "AVX512 checking" compiles: YES 00:01:48.516 Fetching value of define "__SSE4_2__" : 1 00:01:48.516 Fetching value of define "__AES__" : 1 00:01:48.516 Fetching value of define "__AVX__" : 1 00:01:48.516 Fetching value of define "__AVX2__" : 1 00:01:48.516 Fetching value of define "__AVX512BW__" : 1 00:01:48.516 Fetching value of define "__AVX512CD__" : 1 00:01:48.516 Fetching value of define "__AVX512DQ__" : 1 00:01:48.516 Fetching value of define "__AVX512F__" : 1 00:01:48.516 Fetching value of define "__AVX512VL__" : 1 00:01:48.516 Fetching value of define "__PCLMUL__" : 1 00:01:48.516 Fetching value of define "__RDRND__" : 1 00:01:48.516 Fetching value of define "__RDSEED__" : 1 00:01:48.516 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:48.516 Fetching value of define "__znver1__" : (undefined) 00:01:48.516 Fetching value of define "__znver2__" : (undefined) 00:01:48.516 Fetching value of define "__znver3__" : (undefined) 00:01:48.516 Fetching value of define "__znver4__" : (undefined) 00:01:48.516 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:48.516 Message: lib/log: Defining dependency "log" 00:01:48.516 Message: lib/kvargs: Defining dependency "kvargs" 00:01:48.516 Message: lib/telemetry: Defining dependency "telemetry" 00:01:48.516 Checking for function "getentropy" : NO 00:01:48.516 Message: lib/eal: Defining dependency "eal" 00:01:48.516 Message: lib/ring: Defining dependency "ring" 00:01:48.516 Message: lib/rcu: Defining dependency "rcu" 00:01:48.516 Message: lib/mempool: Defining dependency "mempool" 00:01:48.516 Message: lib/mbuf: Defining dependency "mbuf" 00:01:48.516 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:48.516 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:48.516 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:48.516 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:48.516 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:48.516 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:48.516 Compiler for C supports arguments -mpclmul: YES 00:01:48.516 Compiler for C supports arguments -maes: YES 00:01:48.516 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:48.516 Compiler for C supports arguments -mavx512bw: YES 00:01:48.516 Compiler for C supports arguments -mavx512dq: YES 00:01:48.516 Compiler for C supports arguments -mavx512vl: YES 00:01:48.516 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:48.516 Compiler for C supports arguments -mavx2: YES 00:01:48.516 Compiler for C supports arguments -mavx: YES 00:01:48.516 Message: lib/net: Defining dependency "net" 00:01:48.516 
Message: lib/meter: Defining dependency "meter" 00:01:48.516 Message: lib/ethdev: Defining dependency "ethdev" 00:01:48.516 Message: lib/pci: Defining dependency "pci" 00:01:48.516 Message: lib/cmdline: Defining dependency "cmdline" 00:01:48.516 Message: lib/hash: Defining dependency "hash" 00:01:48.516 Message: lib/timer: Defining dependency "timer" 00:01:48.516 Message: lib/compressdev: Defining dependency "compressdev" 00:01:48.516 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:48.516 Message: lib/dmadev: Defining dependency "dmadev" 00:01:48.516 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:48.516 Message: lib/power: Defining dependency "power" 00:01:48.516 Message: lib/reorder: Defining dependency "reorder" 00:01:48.516 Message: lib/security: Defining dependency "security" 00:01:48.516 Has header "linux/userfaultfd.h" : YES 00:01:48.516 Has header "linux/vduse.h" : YES 00:01:48.516 Message: lib/vhost: Defining dependency "vhost" 00:01:48.516 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:48.516 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:48.516 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:48.516 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:48.516 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:48.516 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:48.516 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:48.516 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:48.516 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:48.516 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:48.516 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:48.516 Configuring doxy-api-html.conf using configuration 00:01:48.516 Configuring doxy-api-man.conf using configuration 00:01:48.516 Program mandb found: YES (/usr/bin/mandb) 00:01:48.516 Program sphinx-build found: NO 00:01:48.516 Configuring rte_build_config.h using configuration 00:01:48.516 Message: 00:01:48.516 ================= 00:01:48.517 Applications Enabled 00:01:48.517 ================= 00:01:48.517 00:01:48.517 apps: 00:01:48.517 00:01:48.517 00:01:48.517 Message: 00:01:48.517 ================= 00:01:48.517 Libraries Enabled 00:01:48.517 ================= 00:01:48.517 00:01:48.517 libs: 00:01:48.517 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:48.517 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:48.517 cryptodev, dmadev, power, reorder, security, vhost, 00:01:48.517 00:01:48.517 Message: 00:01:48.517 =============== 00:01:48.517 Drivers Enabled 00:01:48.517 =============== 00:01:48.517 00:01:48.517 common: 00:01:48.517 00:01:48.517 bus: 00:01:48.517 pci, vdev, 00:01:48.517 mempool: 00:01:48.517 ring, 00:01:48.517 dma: 00:01:48.517 00:01:48.517 net: 00:01:48.517 00:01:48.517 crypto: 00:01:48.517 00:01:48.517 compress: 00:01:48.517 00:01:48.517 vdpa: 00:01:48.517 00:01:48.517 00:01:48.517 Message: 00:01:48.517 ================= 00:01:48.517 Content Skipped 00:01:48.517 ================= 00:01:48.517 00:01:48.517 apps: 00:01:48.517 dumpcap: explicitly disabled via build config 00:01:48.517 graph: explicitly disabled via build config 00:01:48.517 pdump: explicitly disabled via build config 00:01:48.517 proc-info: explicitly disabled via build config 00:01:48.517 test-acl: explicitly disabled via build config 
00:01:48.517 test-bbdev: explicitly disabled via build config 00:01:48.517 test-cmdline: explicitly disabled via build config 00:01:48.517 test-compress-perf: explicitly disabled via build config 00:01:48.517 test-crypto-perf: explicitly disabled via build config 00:01:48.517 test-dma-perf: explicitly disabled via build config 00:01:48.517 test-eventdev: explicitly disabled via build config 00:01:48.517 test-fib: explicitly disabled via build config 00:01:48.517 test-flow-perf: explicitly disabled via build config 00:01:48.517 test-gpudev: explicitly disabled via build config 00:01:48.517 test-mldev: explicitly disabled via build config 00:01:48.517 test-pipeline: explicitly disabled via build config 00:01:48.517 test-pmd: explicitly disabled via build config 00:01:48.517 test-regex: explicitly disabled via build config 00:01:48.517 test-sad: explicitly disabled via build config 00:01:48.517 test-security-perf: explicitly disabled via build config 00:01:48.517 00:01:48.517 libs: 00:01:48.517 metrics: explicitly disabled via build config 00:01:48.517 acl: explicitly disabled via build config 00:01:48.517 bbdev: explicitly disabled via build config 00:01:48.517 bitratestats: explicitly disabled via build config 00:01:48.517 bpf: explicitly disabled via build config 00:01:48.517 cfgfile: explicitly disabled via build config 00:01:48.517 distributor: explicitly disabled via build config 00:01:48.517 efd: explicitly disabled via build config 00:01:48.517 eventdev: explicitly disabled via build config 00:01:48.517 dispatcher: explicitly disabled via build config 00:01:48.517 gpudev: explicitly disabled via build config 00:01:48.517 gro: explicitly disabled via build config 00:01:48.517 gso: explicitly disabled via build config 00:01:48.517 ip_frag: explicitly disabled via build config 00:01:48.517 jobstats: explicitly disabled via build config 00:01:48.517 latencystats: explicitly disabled via build config 00:01:48.517 lpm: explicitly disabled via build config 00:01:48.517 member: explicitly disabled via build config 00:01:48.517 pcapng: explicitly disabled via build config 00:01:48.517 rawdev: explicitly disabled via build config 00:01:48.517 regexdev: explicitly disabled via build config 00:01:48.517 mldev: explicitly disabled via build config 00:01:48.517 rib: explicitly disabled via build config 00:01:48.517 sched: explicitly disabled via build config 00:01:48.517 stack: explicitly disabled via build config 00:01:48.517 ipsec: explicitly disabled via build config 00:01:48.517 pdcp: explicitly disabled via build config 00:01:48.517 fib: explicitly disabled via build config 00:01:48.517 port: explicitly disabled via build config 00:01:48.517 pdump: explicitly disabled via build config 00:01:48.517 table: explicitly disabled via build config 00:01:48.517 pipeline: explicitly disabled via build config 00:01:48.517 graph: explicitly disabled via build config 00:01:48.517 node: explicitly disabled via build config 00:01:48.517 00:01:48.517 drivers: 00:01:48.517 common/cpt: not in enabled drivers build config 00:01:48.517 common/dpaax: not in enabled drivers build config 00:01:48.517 common/iavf: not in enabled drivers build config 00:01:48.517 common/idpf: not in enabled drivers build config 00:01:48.517 common/mvep: not in enabled drivers build config 00:01:48.517 common/octeontx: not in enabled drivers build config 00:01:48.517 bus/auxiliary: not in enabled drivers build config 00:01:48.517 bus/cdx: not in enabled drivers build config 00:01:48.517 bus/dpaa: not in enabled drivers build config 
00:01:48.517 bus/fslmc: not in enabled drivers build config 00:01:48.517 bus/ifpga: not in enabled drivers build config 00:01:48.517 bus/platform: not in enabled drivers build config 00:01:48.517 bus/vmbus: not in enabled drivers build config 00:01:48.517 common/cnxk: not in enabled drivers build config 00:01:48.517 common/mlx5: not in enabled drivers build config 00:01:48.517 common/nfp: not in enabled drivers build config 00:01:48.517 common/qat: not in enabled drivers build config 00:01:48.517 common/sfc_efx: not in enabled drivers build config 00:01:48.517 mempool/bucket: not in enabled drivers build config 00:01:48.517 mempool/cnxk: not in enabled drivers build config 00:01:48.517 mempool/dpaa: not in enabled drivers build config 00:01:48.517 mempool/dpaa2: not in enabled drivers build config 00:01:48.517 mempool/octeontx: not in enabled drivers build config 00:01:48.517 mempool/stack: not in enabled drivers build config 00:01:48.517 dma/cnxk: not in enabled drivers build config 00:01:48.517 dma/dpaa: not in enabled drivers build config 00:01:48.517 dma/dpaa2: not in enabled drivers build config 00:01:48.517 dma/hisilicon: not in enabled drivers build config 00:01:48.517 dma/idxd: not in enabled drivers build config 00:01:48.517 dma/ioat: not in enabled drivers build config 00:01:48.517 dma/skeleton: not in enabled drivers build config 00:01:48.517 net/af_packet: not in enabled drivers build config 00:01:48.517 net/af_xdp: not in enabled drivers build config 00:01:48.517 net/ark: not in enabled drivers build config 00:01:48.517 net/atlantic: not in enabled drivers build config 00:01:48.517 net/avp: not in enabled drivers build config 00:01:48.517 net/axgbe: not in enabled drivers build config 00:01:48.517 net/bnx2x: not in enabled drivers build config 00:01:48.517 net/bnxt: not in enabled drivers build config 00:01:48.517 net/bonding: not in enabled drivers build config 00:01:48.517 net/cnxk: not in enabled drivers build config 00:01:48.517 net/cpfl: not in enabled drivers build config 00:01:48.517 net/cxgbe: not in enabled drivers build config 00:01:48.517 net/dpaa: not in enabled drivers build config 00:01:48.517 net/dpaa2: not in enabled drivers build config 00:01:48.517 net/e1000: not in enabled drivers build config 00:01:48.517 net/ena: not in enabled drivers build config 00:01:48.517 net/enetc: not in enabled drivers build config 00:01:48.517 net/enetfec: not in enabled drivers build config 00:01:48.517 net/enic: not in enabled drivers build config 00:01:48.517 net/failsafe: not in enabled drivers build config 00:01:48.517 net/fm10k: not in enabled drivers build config 00:01:48.517 net/gve: not in enabled drivers build config 00:01:48.517 net/hinic: not in enabled drivers build config 00:01:48.517 net/hns3: not in enabled drivers build config 00:01:48.517 net/i40e: not in enabled drivers build config 00:01:48.517 net/iavf: not in enabled drivers build config 00:01:48.517 net/ice: not in enabled drivers build config 00:01:48.517 net/idpf: not in enabled drivers build config 00:01:48.517 net/igc: not in enabled drivers build config 00:01:48.517 net/ionic: not in enabled drivers build config 00:01:48.517 net/ipn3ke: not in enabled drivers build config 00:01:48.517 net/ixgbe: not in enabled drivers build config 00:01:48.517 net/mana: not in enabled drivers build config 00:01:48.517 net/memif: not in enabled drivers build config 00:01:48.517 net/mlx4: not in enabled drivers build config 00:01:48.517 net/mlx5: not in enabled drivers build config 00:01:48.517 net/mvneta: not in enabled 
drivers build config 00:01:48.517 net/mvpp2: not in enabled drivers build config 00:01:48.517 net/netvsc: not in enabled drivers build config 00:01:48.517 net/nfb: not in enabled drivers build config 00:01:48.517 net/nfp: not in enabled drivers build config 00:01:48.517 net/ngbe: not in enabled drivers build config 00:01:48.517 net/null: not in enabled drivers build config 00:01:48.517 net/octeontx: not in enabled drivers build config 00:01:48.517 net/octeon_ep: not in enabled drivers build config 00:01:48.517 net/pcap: not in enabled drivers build config 00:01:48.517 net/pfe: not in enabled drivers build config 00:01:48.517 net/qede: not in enabled drivers build config 00:01:48.517 net/ring: not in enabled drivers build config 00:01:48.517 net/sfc: not in enabled drivers build config 00:01:48.517 net/softnic: not in enabled drivers build config 00:01:48.517 net/tap: not in enabled drivers build config 00:01:48.517 net/thunderx: not in enabled drivers build config 00:01:48.517 net/txgbe: not in enabled drivers build config 00:01:48.517 net/vdev_netvsc: not in enabled drivers build config 00:01:48.517 net/vhost: not in enabled drivers build config 00:01:48.517 net/virtio: not in enabled drivers build config 00:01:48.517 net/vmxnet3: not in enabled drivers build config 00:01:48.517 raw/*: missing internal dependency, "rawdev" 00:01:48.517 crypto/armv8: not in enabled drivers build config 00:01:48.517 crypto/bcmfs: not in enabled drivers build config 00:01:48.517 crypto/caam_jr: not in enabled drivers build config 00:01:48.517 crypto/ccp: not in enabled drivers build config 00:01:48.517 crypto/cnxk: not in enabled drivers build config 00:01:48.517 crypto/dpaa_sec: not in enabled drivers build config 00:01:48.517 crypto/dpaa2_sec: not in enabled drivers build config 00:01:48.517 crypto/ipsec_mb: not in enabled drivers build config 00:01:48.517 crypto/mlx5: not in enabled drivers build config 00:01:48.517 crypto/mvsam: not in enabled drivers build config 00:01:48.517 crypto/nitrox: not in enabled drivers build config 00:01:48.517 crypto/null: not in enabled drivers build config 00:01:48.518 crypto/octeontx: not in enabled drivers build config 00:01:48.518 crypto/openssl: not in enabled drivers build config 00:01:48.518 crypto/scheduler: not in enabled drivers build config 00:01:48.518 crypto/uadk: not in enabled drivers build config 00:01:48.518 crypto/virtio: not in enabled drivers build config 00:01:48.518 compress/isal: not in enabled drivers build config 00:01:48.518 compress/mlx5: not in enabled drivers build config 00:01:48.518 compress/octeontx: not in enabled drivers build config 00:01:48.518 compress/zlib: not in enabled drivers build config 00:01:48.518 regex/*: missing internal dependency, "regexdev" 00:01:48.518 ml/*: missing internal dependency, "mldev" 00:01:48.518 vdpa/ifc: not in enabled drivers build config 00:01:48.518 vdpa/mlx5: not in enabled drivers build config 00:01:48.518 vdpa/nfp: not in enabled drivers build config 00:01:48.518 vdpa/sfc: not in enabled drivers build config 00:01:48.518 event/*: missing internal dependency, "eventdev" 00:01:48.518 baseband/*: missing internal dependency, "bbdev" 00:01:48.518 gpu/*: missing internal dependency, "gpudev" 00:01:48.518 00:01:48.518 00:01:48.778 Build targets in project: 84 00:01:48.778 00:01:48.778 DPDK 23.11.0 00:01:48.778 00:01:48.778 User defined options 00:01:48.778 buildtype : debug 00:01:48.778 default_library : shared 00:01:48.778 libdir : lib 00:01:48.778 prefix : 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:48.779 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:48.779 c_link_args : 00:01:48.779 cpu_instruction_set: native 00:01:48.779 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:48.779 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,pcapng,bbdev 00:01:48.779 enable_docs : false 00:01:48.779 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:48.779 enable_kmods : false 00:01:48.779 tests : false 00:01:48.779 00:01:48.779 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:49.352 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:01:49.618 [1/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:49.618 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:49.618 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:49.618 [4/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:49.618 [5/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:49.618 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:49.618 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:49.618 [8/264] Linking static target lib/librte_kvargs.a 00:01:49.618 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:49.618 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:49.618 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:49.618 [12/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:49.618 [13/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:49.618 [14/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:49.618 [15/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:49.618 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:49.618 [17/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:49.618 [18/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:49.618 [19/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:49.618 [20/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:49.618 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:49.618 [22/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:49.618 [23/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:49.618 [24/264] Linking static target lib/librte_log.a 00:01:49.618 [25/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:49.618 [26/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:49.618 [27/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:49.618 [28/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 
00:01:49.618 [29/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:49.618 [30/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:49.618 [31/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:49.618 [32/264] Linking static target lib/librte_pci.a 00:01:49.618 [33/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:49.618 [34/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:49.878 [35/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:49.878 [36/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:49.878 [37/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:49.878 [38/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:49.878 [39/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:49.878 [40/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:49.878 [41/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:49.878 [42/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:49.878 [43/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:49.878 [44/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:49.878 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:49.878 [46/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.878 [47/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:49.878 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:49.878 [49/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.878 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:49.878 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:49.878 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:49.878 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:49.878 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:50.138 [55/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:50.138 [56/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:50.138 [57/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:50.138 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:50.138 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:50.138 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:50.138 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:50.138 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:50.138 [63/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:50.138 [64/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:50.138 [65/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:50.138 [66/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:50.138 [67/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:50.138 [68/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:50.138 [69/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:50.138 [70/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:50.138 [71/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:50.138 [72/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:50.138 [73/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:50.138 [74/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:50.138 [75/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:50.138 [76/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:50.138 [77/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:50.138 [78/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:50.138 [79/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:50.138 [80/264] Linking static target lib/librte_rcu.a 00:01:50.138 [81/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:50.138 [82/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:50.138 [83/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:50.138 [84/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:50.138 [85/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:50.138 [86/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:50.138 [87/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:50.138 [88/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:50.138 [89/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:50.138 [90/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:50.138 [91/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:50.138 [92/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:50.138 [93/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:50.138 [94/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:50.138 [95/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:50.138 [96/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:50.138 [97/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:50.138 [98/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:50.138 [99/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:50.138 [100/264] Linking static target lib/librte_meter.a 00:01:50.138 [101/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:50.138 [102/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:50.138 [103/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:50.138 [104/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:50.138 [105/264] Linking static target lib/librte_ring.a 00:01:50.138 [106/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:50.138 [107/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:50.138 [108/264] Linking static target lib/librte_cmdline.a 00:01:50.138 [109/264] 
Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:50.138 [110/264] Linking static target lib/librte_telemetry.a 00:01:50.138 [111/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:50.138 [112/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:50.138 [113/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:50.138 [114/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:50.138 [115/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:50.138 [116/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:50.138 [117/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:50.138 [118/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:50.138 [119/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:50.138 [120/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:50.138 [121/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:50.138 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:50.138 [123/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:50.138 [124/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:50.138 [125/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:50.138 [126/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:50.138 [127/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:50.138 [128/264] Linking static target lib/librte_timer.a 00:01:50.138 [129/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:50.138 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:50.138 [131/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:50.138 [132/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:50.138 [133/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:50.138 [134/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:50.138 [135/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.138 [136/264] Linking static target lib/librte_dmadev.a 00:01:50.138 [137/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:50.138 [138/264] Linking static target lib/librte_security.a 00:01:50.138 [139/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:50.138 [140/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:50.138 [141/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:50.138 [142/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:50.138 [143/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:50.138 [144/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:50.138 [145/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:50.138 [146/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:50.138 [147/264] Linking target lib/librte_log.so.24.0 00:01:50.138 [148/264] Linking static target lib/librte_mempool.a 00:01:50.138 [149/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 
00:01:50.400 [150/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:50.400 [151/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:50.400 [152/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:50.400 [153/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:50.400 [154/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:50.400 [155/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:50.400 [156/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:50.400 [157/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:50.400 [158/264] Linking static target lib/librte_net.a 00:01:50.400 [159/264] Linking static target lib/librte_compressdev.a 00:01:50.400 [160/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:50.400 [161/264] Linking static target lib/librte_power.a 00:01:50.400 [162/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:50.400 [163/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:50.400 [164/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:50.400 [165/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:50.400 [166/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:50.400 [167/264] Linking static target lib/librte_eal.a 00:01:50.400 [168/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:50.400 [169/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:50.400 [170/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:50.400 [171/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:50.400 [172/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:50.400 [173/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:50.400 [174/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:50.400 [175/264] Linking static target lib/librte_reorder.a 00:01:50.400 [176/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:50.400 [177/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:50.400 [178/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:50.400 [179/264] Linking static target lib/librte_mbuf.a 00:01:50.400 [180/264] Linking target lib/librte_kvargs.so.24.0 00:01:50.400 [181/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:50.400 [182/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.400 [183/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:50.400 [184/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.400 [185/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.400 [186/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.400 [187/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:50.400 [188/264] Linking static target drivers/librte_bus_vdev.a 00:01:50.400 [189/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:50.400 [190/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:50.400 [191/264] Linking static target lib/librte_hash.a 
00:01:50.400 [192/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.400 [193/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:50.661 [194/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:50.661 [195/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.661 [196/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.661 [197/264] Linking static target drivers/librte_mempool_ring.a 00:01:50.661 [198/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:50.661 [199/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.661 [200/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.661 [201/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.661 [202/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:50.661 [203/264] Linking static target drivers/librte_bus_pci.a 00:01:50.661 [204/264] Linking static target lib/librte_cryptodev.a 00:01:50.661 [205/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.661 [206/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:50.661 [207/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.661 [208/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.922 [209/264] Linking target lib/librte_telemetry.so.24.0 00:01:50.922 [210/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.922 [211/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.922 [212/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.922 [213/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:51.183 [214/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.183 [215/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:51.183 [216/264] Linking static target lib/librte_ethdev.a 00:01:51.183 [217/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:51.183 [218/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.183 [219/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.445 [220/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.445 [221/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.445 [222/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.445 [223/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.019 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:52.019 [225/264] Linking static target lib/librte_vhost.a 00:01:52.966 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.354 [227/264] Generating 
lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.947 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.337 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.337 [230/264] Linking target lib/librte_eal.so.24.0 00:02:02.337 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:02.337 [232/264] Linking target lib/librte_ring.so.24.0 00:02:02.337 [233/264] Linking target lib/librte_meter.so.24.0 00:02:02.337 [234/264] Linking target lib/librte_pci.so.24.0 00:02:02.337 [235/264] Linking target lib/librte_timer.so.24.0 00:02:02.337 [236/264] Linking target lib/librte_dmadev.so.24.0 00:02:02.337 [237/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:02.598 [238/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:02.598 [239/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:02.598 [240/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:02.598 [241/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:02.598 [242/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:02.598 [243/264] Linking target lib/librte_rcu.so.24.0 00:02:02.598 [244/264] Linking target lib/librte_mempool.so.24.0 00:02:02.598 [245/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:02.598 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:02.598 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:02.860 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:02.860 [249/264] Linking target lib/librte_mbuf.so.24.0 00:02:02.860 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:02.860 [251/264] Linking target lib/librte_net.so.24.0 00:02:02.860 [252/264] Linking target lib/librte_compressdev.so.24.0 00:02:02.860 [253/264] Linking target lib/librte_reorder.so.24.0 00:02:02.860 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:02:03.121 [255/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:03.121 [256/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:03.121 [257/264] Linking target lib/librte_hash.so.24.0 00:02:03.121 [258/264] Linking target lib/librte_cmdline.so.24.0 00:02:03.121 [259/264] Linking target lib/librte_security.so.24.0 00:02:03.121 [260/264] Linking target lib/librte_ethdev.so.24.0 00:02:03.383 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:03.383 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:03.383 [263/264] Linking target lib/librte_power.so.24.0 00:02:03.383 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:03.383 INFO: autodetecting backend as ninja 00:02:03.383 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:04.328 CC lib/ut_mock/mock.o 00:02:04.328 CC lib/log/log.o 00:02:04.328 CC lib/log/log_flags.o 00:02:04.328 CC lib/log/log_deprecated.o 00:02:04.328 CC lib/ut/ut.o 00:02:04.590 LIB libspdk_ut_mock.a 00:02:04.590 LIB libspdk_ut.a 00:02:04.590 LIB libspdk_log.a 00:02:04.590 SO libspdk_ut_mock.so.5.0 00:02:04.590 
SO libspdk_ut.so.1.0 00:02:04.590 SO libspdk_log.so.6.1 00:02:04.590 SYMLINK libspdk_ut_mock.so 00:02:04.590 SYMLINK libspdk_ut.so 00:02:04.590 SYMLINK libspdk_log.so 00:02:04.853 CC lib/util/base64.o 00:02:04.853 CC lib/util/bit_array.o 00:02:04.853 CC lib/util/cpuset.o 00:02:04.853 CC lib/util/crc32.o 00:02:04.853 CC lib/util/crc16.o 00:02:04.853 CC lib/util/crc32c.o 00:02:04.853 CC lib/util/crc32_ieee.o 00:02:04.853 CC lib/util/crc64.o 00:02:04.853 CC lib/dma/dma.o 00:02:04.853 CC lib/util/dif.o 00:02:04.853 CC lib/util/fd.o 00:02:04.853 CC lib/util/file.o 00:02:04.853 CC lib/util/iov.o 00:02:04.853 CC lib/util/hexlify.o 00:02:04.853 CC lib/util/math.o 00:02:04.853 CC lib/util/pipe.o 00:02:04.853 CC lib/util/strerror_tls.o 00:02:04.853 CC lib/ioat/ioat.o 00:02:04.853 CC lib/util/string.o 00:02:04.853 CXX lib/trace_parser/trace.o 00:02:04.853 CC lib/util/uuid.o 00:02:04.853 CC lib/util/fd_group.o 00:02:04.853 CC lib/util/xor.o 00:02:04.853 CC lib/util/zipf.o 00:02:05.116 CC lib/vfio_user/host/vfio_user_pci.o 00:02:05.116 CC lib/vfio_user/host/vfio_user.o 00:02:05.116 LIB libspdk_dma.a 00:02:05.116 SO libspdk_dma.so.3.0 00:02:05.116 SYMLINK libspdk_dma.so 00:02:05.116 LIB libspdk_ioat.a 00:02:05.377 SO libspdk_ioat.so.6.0 00:02:05.377 LIB libspdk_vfio_user.a 00:02:05.377 SYMLINK libspdk_ioat.so 00:02:05.377 SO libspdk_vfio_user.so.4.0 00:02:05.377 SYMLINK libspdk_vfio_user.so 00:02:05.377 LIB libspdk_util.a 00:02:05.638 SO libspdk_util.so.8.0 00:02:05.638 SYMLINK libspdk_util.so 00:02:05.899 CC lib/json/json_parse.o 00:02:05.899 CC lib/json/json_util.o 00:02:05.899 CC lib/json/json_write.o 00:02:05.899 CC lib/vmd/vmd.o 00:02:05.899 CC lib/conf/conf.o 00:02:05.899 CC lib/vmd/led.o 00:02:05.899 CC lib/env_dpdk/env.o 00:02:05.899 CC lib/env_dpdk/memory.o 00:02:05.899 CC lib/rdma/common.o 00:02:05.899 CC lib/env_dpdk/pci.o 00:02:05.899 CC lib/rdma/rdma_verbs.o 00:02:05.899 CC lib/env_dpdk/threads.o 00:02:05.899 CC lib/env_dpdk/init.o 00:02:05.899 CC lib/idxd/idxd.o 00:02:05.899 CC lib/idxd/idxd_user.o 00:02:05.899 CC lib/env_dpdk/pci_ioat.o 00:02:05.899 CC lib/idxd/idxd_kernel.o 00:02:05.899 CC lib/env_dpdk/pci_virtio.o 00:02:05.899 CC lib/env_dpdk/pci_vmd.o 00:02:05.899 CC lib/env_dpdk/pci_idxd.o 00:02:05.899 CC lib/env_dpdk/pci_event.o 00:02:05.899 CC lib/env_dpdk/sigbus_handler.o 00:02:05.899 CC lib/env_dpdk/pci_dpdk.o 00:02:05.899 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:05.899 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:06.161 LIB libspdk_conf.a 00:02:06.161 SO libspdk_conf.so.5.0 00:02:06.161 LIB libspdk_rdma.a 00:02:06.161 LIB libspdk_json.a 00:02:06.161 SO libspdk_rdma.so.5.0 00:02:06.161 SO libspdk_json.so.5.1 00:02:06.161 SYMLINK libspdk_conf.so 00:02:06.161 SYMLINK libspdk_rdma.so 00:02:06.423 SYMLINK libspdk_json.so 00:02:06.423 LIB libspdk_trace_parser.a 00:02:06.423 LIB libspdk_idxd.a 00:02:06.423 SO libspdk_trace_parser.so.4.0 00:02:06.423 SO libspdk_idxd.so.11.0 00:02:06.423 LIB libspdk_vmd.a 00:02:06.423 CC lib/jsonrpc/jsonrpc_server.o 00:02:06.423 SYMLINK libspdk_idxd.so 00:02:06.423 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:06.423 CC lib/jsonrpc/jsonrpc_client.o 00:02:06.423 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:06.423 SYMLINK libspdk_trace_parser.so 00:02:06.423 SO libspdk_vmd.so.5.0 00:02:06.685 SYMLINK libspdk_vmd.so 00:02:06.685 LIB libspdk_jsonrpc.a 00:02:06.946 SO libspdk_jsonrpc.so.5.1 00:02:06.946 SYMLINK libspdk_jsonrpc.so 00:02:07.208 CC lib/rpc/rpc.o 00:02:07.208 LIB libspdk_env_dpdk.a 00:02:07.208 SO libspdk_env_dpdk.so.13.0 00:02:07.208 SYMLINK libspdk_env_dpdk.so 
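(The LIB/SO/SYMLINK triplets in this part of the log are SPDK's make output emitting each static archive, its versioned shared object, and an unversioned development symlink. A minimal, hypothetical sketch of that pattern — illustrative names only, not SPDK's actual Makefile rules — could look like:)

  # Build a versioned shared object and point the unversioned name at it,
  # mirroring the "SO libspdk_log.so.6.1" / "SYMLINK libspdk_log.so" entries.
  # libdemo/demo.c are placeholder names, not files from this build.
  cc -shared -fPIC -Wl,-soname,libdemo.so.6 -o libdemo.so.6.1 demo.c
  ln -sf libdemo.so.6.1 libdemo.so.6   # runtime (soname) link
  ln -sf libdemo.so.6 libdemo.so       # development link used at link time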
00:02:07.208 LIB libspdk_rpc.a 00:02:07.470 SO libspdk_rpc.so.5.0 00:02:07.470 SYMLINK libspdk_rpc.so 00:02:07.732 CC lib/sock/sock.o 00:02:07.732 CC lib/trace/trace.o 00:02:07.732 CC lib/trace/trace_flags.o 00:02:07.732 CC lib/sock/sock_rpc.o 00:02:07.732 CC lib/trace/trace_rpc.o 00:02:07.732 CC lib/notify/notify.o 00:02:07.732 CC lib/notify/notify_rpc.o 00:02:07.732 LIB libspdk_notify.a 00:02:07.995 SO libspdk_notify.so.5.0 00:02:07.995 LIB libspdk_trace.a 00:02:07.995 SYMLINK libspdk_notify.so 00:02:07.995 SO libspdk_trace.so.9.0 00:02:07.995 SYMLINK libspdk_trace.so 00:02:07.995 LIB libspdk_sock.a 00:02:07.995 SO libspdk_sock.so.8.0 00:02:08.257 SYMLINK libspdk_sock.so 00:02:08.257 CC lib/thread/thread.o 00:02:08.257 CC lib/thread/iobuf.o 00:02:08.257 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:08.257 CC lib/nvme/nvme_ctrlr.o 00:02:08.257 CC lib/nvme/nvme_fabric.o 00:02:08.257 CC lib/nvme/nvme_ns_cmd.o 00:02:08.257 CC lib/nvme/nvme_ns.o 00:02:08.257 CC lib/nvme/nvme_pcie_common.o 00:02:08.257 CC lib/nvme/nvme_pcie.o 00:02:08.257 CC lib/nvme/nvme_qpair.o 00:02:08.257 CC lib/nvme/nvme.o 00:02:08.257 CC lib/nvme/nvme_quirks.o 00:02:08.257 CC lib/nvme/nvme_transport.o 00:02:08.519 CC lib/nvme/nvme_discovery.o 00:02:08.519 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:08.519 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:08.519 CC lib/nvme/nvme_tcp.o 00:02:08.519 CC lib/nvme/nvme_opal.o 00:02:08.519 CC lib/nvme/nvme_io_msg.o 00:02:08.519 CC lib/nvme/nvme_poll_group.o 00:02:08.519 CC lib/nvme/nvme_zns.o 00:02:08.519 CC lib/nvme/nvme_cuse.o 00:02:08.519 CC lib/nvme/nvme_vfio_user.o 00:02:08.519 CC lib/nvme/nvme_rdma.o 00:02:09.908 LIB libspdk_thread.a 00:02:09.908 SO libspdk_thread.so.9.0 00:02:09.908 SYMLINK libspdk_thread.so 00:02:09.908 CC lib/blob/blobstore.o 00:02:09.908 CC lib/blob/request.o 00:02:09.908 CC lib/accel/accel.o 00:02:09.908 CC lib/init/json_config.o 00:02:09.908 CC lib/blob/zeroes.o 00:02:09.908 CC lib/accel/accel_rpc.o 00:02:09.908 CC lib/init/subsystem.o 00:02:09.908 CC lib/blob/blob_bs_dev.o 00:02:09.908 CC lib/virtio/virtio.o 00:02:09.908 CC lib/accel/accel_sw.o 00:02:09.908 CC lib/init/subsystem_rpc.o 00:02:09.908 CC lib/virtio/virtio_vhost_user.o 00:02:09.908 CC lib/init/rpc.o 00:02:09.908 CC lib/virtio/virtio_vfio_user.o 00:02:09.908 CC lib/virtio/virtio_pci.o 00:02:10.170 LIB libspdk_init.a 00:02:10.170 LIB libspdk_nvme.a 00:02:10.170 SO libspdk_init.so.4.0 00:02:10.432 LIB libspdk_virtio.a 00:02:10.432 SO libspdk_virtio.so.6.0 00:02:10.432 SYMLINK libspdk_init.so 00:02:10.432 SO libspdk_nvme.so.12.0 00:02:10.432 SYMLINK libspdk_virtio.so 00:02:10.695 CC lib/event/app.o 00:02:10.695 CC lib/event/reactor.o 00:02:10.695 CC lib/event/log_rpc.o 00:02:10.695 CC lib/event/app_rpc.o 00:02:10.695 CC lib/event/scheduler_static.o 00:02:10.695 SYMLINK libspdk_nvme.so 00:02:10.957 LIB libspdk_accel.a 00:02:10.957 LIB libspdk_event.a 00:02:10.957 SO libspdk_accel.so.14.0 00:02:10.957 SO libspdk_event.so.12.0 00:02:10.957 SYMLINK libspdk_accel.so 00:02:11.219 SYMLINK libspdk_event.so 00:02:11.219 CC lib/bdev/bdev.o 00:02:11.219 CC lib/bdev/bdev_rpc.o 00:02:11.219 CC lib/bdev/bdev_zone.o 00:02:11.219 CC lib/bdev/part.o 00:02:11.219 CC lib/bdev/scsi_nvme.o 00:02:12.607 LIB libspdk_blob.a 00:02:12.607 SO libspdk_blob.so.10.1 00:02:12.607 SYMLINK libspdk_blob.so 00:02:12.868 CC lib/blobfs/blobfs.o 00:02:12.868 CC lib/lvol/lvol.o 00:02:12.868 CC lib/blobfs/tree.o 00:02:13.439 LIB libspdk_bdev.a 00:02:13.439 SO libspdk_bdev.so.14.0 00:02:13.439 SYMLINK libspdk_bdev.so 00:02:13.439 LIB libspdk_blobfs.a 
00:02:13.440 SO libspdk_blobfs.so.9.0 00:02:13.440 LIB libspdk_lvol.a 00:02:13.700 SO libspdk_lvol.so.9.1 00:02:13.700 SYMLINK libspdk_blobfs.so 00:02:13.700 CC lib/nvmf/ctrlr.o 00:02:13.700 CC lib/nvmf/ctrlr_discovery.o 00:02:13.700 CC lib/nvmf/ctrlr_bdev.o 00:02:13.700 CC lib/nvmf/subsystem.o 00:02:13.700 CC lib/nvmf/nvmf.o 00:02:13.700 CC lib/nvmf/transport.o 00:02:13.700 CC lib/scsi/dev.o 00:02:13.700 CC lib/nvmf/nvmf_rpc.o 00:02:13.700 CC lib/scsi/lun.o 00:02:13.700 CC lib/ftl/ftl_core.o 00:02:13.700 CC lib/nbd/nbd.o 00:02:13.700 CC lib/nvmf/tcp.o 00:02:13.700 CC lib/scsi/port.o 00:02:13.700 CC lib/ublk/ublk.o 00:02:13.700 CC lib/nbd/nbd_rpc.o 00:02:13.700 CC lib/ftl/ftl_init.o 00:02:13.700 CC lib/ublk/ublk_rpc.o 00:02:13.700 CC lib/scsi/scsi.o 00:02:13.700 CC lib/nvmf/rdma.o 00:02:13.700 CC lib/ftl/ftl_layout.o 00:02:13.700 CC lib/scsi/scsi_bdev.o 00:02:13.700 CC lib/ftl/ftl_debug.o 00:02:13.700 CC lib/scsi/scsi_pr.o 00:02:13.700 CC lib/scsi/scsi_rpc.o 00:02:13.700 CC lib/ftl/ftl_io.o 00:02:13.700 CC lib/scsi/task.o 00:02:13.700 CC lib/ftl/ftl_sb.o 00:02:13.700 CC lib/ftl/ftl_l2p.o 00:02:13.700 CC lib/ftl/ftl_l2p_flat.o 00:02:13.700 CC lib/ftl/ftl_nv_cache.o 00:02:13.700 CC lib/ftl/ftl_band.o 00:02:13.700 CC lib/ftl/ftl_band_ops.o 00:02:13.700 CC lib/ftl/ftl_writer.o 00:02:13.700 CC lib/ftl/ftl_rq.o 00:02:13.700 CC lib/ftl/ftl_reloc.o 00:02:13.700 SYMLINK libspdk_lvol.so 00:02:13.700 CC lib/ftl/ftl_l2p_cache.o 00:02:13.700 CC lib/ftl/ftl_p2l.o 00:02:13.700 CC lib/ftl/mngt/ftl_mngt.o 00:02:13.700 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:13.700 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:13.700 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:13.700 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:13.700 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:13.700 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:13.700 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:13.700 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:13.700 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:13.700 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:13.700 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:13.700 CC lib/ftl/utils/ftl_conf.o 00:02:13.700 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:13.700 CC lib/ftl/utils/ftl_md.o 00:02:13.700 CC lib/ftl/utils/ftl_bitmap.o 00:02:13.700 CC lib/ftl/utils/ftl_mempool.o 00:02:13.700 CC lib/ftl/utils/ftl_property.o 00:02:13.700 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:13.700 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:13.700 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:13.700 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:13.700 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:13.700 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:13.700 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:13.700 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:13.700 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:13.700 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:13.700 CC lib/ftl/base/ftl_base_bdev.o 00:02:13.700 CC lib/ftl/base/ftl_base_dev.o 00:02:13.700 CC lib/ftl/ftl_trace.o 00:02:14.273 LIB libspdk_nbd.a 00:02:14.273 SO libspdk_nbd.so.6.0 00:02:14.273 LIB libspdk_scsi.a 00:02:14.273 SO libspdk_scsi.so.8.0 00:02:14.273 SYMLINK libspdk_nbd.so 00:02:14.273 LIB libspdk_ublk.a 00:02:14.273 SYMLINK libspdk_scsi.so 00:02:14.273 SO libspdk_ublk.so.2.0 00:02:14.273 SYMLINK libspdk_ublk.so 00:02:14.537 LIB libspdk_ftl.a 00:02:14.537 CC lib/iscsi/conn.o 00:02:14.537 CC lib/iscsi/init_grp.o 00:02:14.537 CC lib/iscsi/iscsi.o 00:02:14.537 CC lib/iscsi/md5.o 00:02:14.537 CC lib/iscsi/param.o 00:02:14.537 CC lib/vhost/vhost.o 00:02:14.537 CC lib/iscsi/portal_grp.o 00:02:14.537 CC lib/vhost/vhost_rpc.o 00:02:14.537 CC 
lib/iscsi/iscsi_subsystem.o 00:02:14.537 CC lib/iscsi/tgt_node.o 00:02:14.537 CC lib/vhost/vhost_scsi.o 00:02:14.537 CC lib/vhost/vhost_blk.o 00:02:14.537 CC lib/iscsi/iscsi_rpc.o 00:02:14.537 CC lib/vhost/rte_vhost_user.o 00:02:14.537 CC lib/iscsi/task.o 00:02:14.537 SO libspdk_ftl.so.8.0 00:02:14.800 SYMLINK libspdk_ftl.so 00:02:15.374 LIB libspdk_nvmf.a 00:02:15.374 SO libspdk_nvmf.so.17.0 00:02:15.374 LIB libspdk_vhost.a 00:02:15.636 SO libspdk_vhost.so.7.1 00:02:15.636 SYMLINK libspdk_nvmf.so 00:02:15.636 SYMLINK libspdk_vhost.so 00:02:15.636 LIB libspdk_iscsi.a 00:02:15.898 SO libspdk_iscsi.so.7.0 00:02:15.898 SYMLINK libspdk_iscsi.so 00:02:16.471 CC module/env_dpdk/env_dpdk_rpc.o 00:02:16.471 CC module/accel/ioat/accel_ioat.o 00:02:16.471 CC module/blob/bdev/blob_bdev.o 00:02:16.471 CC module/accel/ioat/accel_ioat_rpc.o 00:02:16.471 CC module/accel/error/accel_error.o 00:02:16.471 CC module/accel/error/accel_error_rpc.o 00:02:16.471 CC module/accel/dsa/accel_dsa_rpc.o 00:02:16.471 CC module/accel/dsa/accel_dsa.o 00:02:16.471 CC module/accel/iaa/accel_iaa.o 00:02:16.471 CC module/sock/posix/posix.o 00:02:16.471 CC module/accel/iaa/accel_iaa_rpc.o 00:02:16.471 CC module/scheduler/gscheduler/gscheduler.o 00:02:16.471 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:16.471 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:16.471 LIB libspdk_env_dpdk_rpc.a 00:02:16.471 SO libspdk_env_dpdk_rpc.so.5.0 00:02:16.471 SYMLINK libspdk_env_dpdk_rpc.so 00:02:16.471 LIB libspdk_scheduler_gscheduler.a 00:02:16.732 LIB libspdk_scheduler_dpdk_governor.a 00:02:16.732 LIB libspdk_accel_error.a 00:02:16.732 SO libspdk_scheduler_gscheduler.so.3.0 00:02:16.732 LIB libspdk_accel_ioat.a 00:02:16.732 LIB libspdk_scheduler_dynamic.a 00:02:16.732 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:16.732 SO libspdk_accel_error.so.1.0 00:02:16.732 LIB libspdk_accel_iaa.a 00:02:16.732 SO libspdk_accel_ioat.so.5.0 00:02:16.732 LIB libspdk_accel_dsa.a 00:02:16.732 SO libspdk_scheduler_dynamic.so.3.0 00:02:16.732 LIB libspdk_blob_bdev.a 00:02:16.732 SYMLINK libspdk_scheduler_gscheduler.so 00:02:16.732 SO libspdk_accel_iaa.so.2.0 00:02:16.732 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:16.732 SO libspdk_accel_dsa.so.4.0 00:02:16.732 SYMLINK libspdk_accel_error.so 00:02:16.732 SO libspdk_blob_bdev.so.10.1 00:02:16.732 SYMLINK libspdk_scheduler_dynamic.so 00:02:16.732 SYMLINK libspdk_accel_ioat.so 00:02:16.732 SYMLINK libspdk_accel_iaa.so 00:02:16.732 SYMLINK libspdk_blob_bdev.so 00:02:16.732 SYMLINK libspdk_accel_dsa.so 00:02:17.026 LIB libspdk_sock_posix.a 00:02:17.026 SO libspdk_sock_posix.so.5.0 00:02:17.288 CC module/bdev/lvol/vbdev_lvol.o 00:02:17.288 CC module/bdev/delay/vbdev_delay.o 00:02:17.288 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:17.288 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:17.288 CC module/bdev/malloc/bdev_malloc.o 00:02:17.288 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:17.288 CC module/bdev/nvme/bdev_nvme.o 00:02:17.288 CC module/bdev/error/vbdev_error.o 00:02:17.288 CC module/bdev/gpt/gpt.o 00:02:17.288 CC module/bdev/gpt/vbdev_gpt.o 00:02:17.288 CC module/bdev/error/vbdev_error_rpc.o 00:02:17.288 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:17.288 CC module/bdev/nvme/nvme_rpc.o 00:02:17.288 CC module/bdev/nvme/bdev_mdns_client.o 00:02:17.288 CC module/bdev/nvme/vbdev_opal.o 00:02:17.288 CC module/bdev/aio/bdev_aio.o 00:02:17.288 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:17.288 CC module/bdev/raid/bdev_raid.o 00:02:17.288 CC module/blobfs/bdev/blobfs_bdev.o 00:02:17.288 CC 
module/bdev/split/vbdev_split.o 00:02:17.288 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:17.288 CC module/bdev/aio/bdev_aio_rpc.o 00:02:17.288 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:17.288 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:17.288 CC module/bdev/raid/bdev_raid_rpc.o 00:02:17.288 CC module/bdev/split/vbdev_split_rpc.o 00:02:17.288 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:17.288 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:17.288 CC module/bdev/null/bdev_null.o 00:02:17.288 CC module/bdev/iscsi/bdev_iscsi.o 00:02:17.288 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:17.288 CC module/bdev/raid/bdev_raid_sb.o 00:02:17.288 CC module/bdev/ftl/bdev_ftl.o 00:02:17.288 CC module/bdev/null/bdev_null_rpc.o 00:02:17.288 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:17.288 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:17.288 CC module/bdev/passthru/vbdev_passthru.o 00:02:17.288 CC module/bdev/raid/raid0.o 00:02:17.288 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:17.288 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:17.288 CC module/bdev/raid/raid1.o 00:02:17.288 CC module/bdev/raid/concat.o 00:02:17.288 SYMLINK libspdk_sock_posix.so 00:02:17.550 LIB libspdk_blobfs_bdev.a 00:02:17.550 SO libspdk_blobfs_bdev.so.5.0 00:02:17.550 LIB libspdk_bdev_split.a 00:02:17.550 LIB libspdk_bdev_gpt.a 00:02:17.550 LIB libspdk_bdev_null.a 00:02:17.550 LIB libspdk_bdev_error.a 00:02:17.550 SO libspdk_bdev_split.so.5.0 00:02:17.550 SYMLINK libspdk_blobfs_bdev.so 00:02:17.550 LIB libspdk_bdev_ftl.a 00:02:17.550 LIB libspdk_bdev_passthru.a 00:02:17.550 LIB libspdk_bdev_malloc.a 00:02:17.550 SO libspdk_bdev_gpt.so.5.0 00:02:17.550 SO libspdk_bdev_error.so.5.0 00:02:17.550 LIB libspdk_bdev_aio.a 00:02:17.550 SO libspdk_bdev_ftl.so.5.0 00:02:17.550 SO libspdk_bdev_null.so.5.0 00:02:17.550 LIB libspdk_bdev_zone_block.a 00:02:17.550 SO libspdk_bdev_passthru.so.5.0 00:02:17.550 LIB libspdk_bdev_delay.a 00:02:17.550 SO libspdk_bdev_malloc.so.5.0 00:02:17.550 SYMLINK libspdk_bdev_split.so 00:02:17.550 SO libspdk_bdev_aio.so.5.0 00:02:17.550 LIB libspdk_bdev_iscsi.a 00:02:17.550 SO libspdk_bdev_zone_block.so.5.0 00:02:17.550 SYMLINK libspdk_bdev_ftl.so 00:02:17.550 SYMLINK libspdk_bdev_gpt.so 00:02:17.550 SYMLINK libspdk_bdev_error.so 00:02:17.550 SO libspdk_bdev_delay.so.5.0 00:02:17.550 SYMLINK libspdk_bdev_null.so 00:02:17.811 SO libspdk_bdev_iscsi.so.5.0 00:02:17.811 SYMLINK libspdk_bdev_aio.so 00:02:17.811 SYMLINK libspdk_bdev_passthru.so 00:02:17.811 SYMLINK libspdk_bdev_malloc.so 00:02:17.811 SYMLINK libspdk_bdev_zone_block.so 00:02:17.811 SYMLINK libspdk_bdev_delay.so 00:02:17.811 LIB libspdk_bdev_lvol.a 00:02:17.811 SYMLINK libspdk_bdev_iscsi.so 00:02:17.811 LIB libspdk_bdev_virtio.a 00:02:17.811 SO libspdk_bdev_lvol.so.5.0 00:02:17.811 SO libspdk_bdev_virtio.so.5.0 00:02:17.811 SYMLINK libspdk_bdev_lvol.so 00:02:17.811 SYMLINK libspdk_bdev_virtio.so 00:02:18.072 LIB libspdk_bdev_raid.a 00:02:18.072 SO libspdk_bdev_raid.so.5.0 00:02:18.072 SYMLINK libspdk_bdev_raid.so 00:02:19.018 LIB libspdk_bdev_nvme.a 00:02:19.280 SO libspdk_bdev_nvme.so.6.0 00:02:19.280 SYMLINK libspdk_bdev_nvme.so 00:02:19.854 CC module/event/subsystems/vmd/vmd.o 00:02:19.854 CC module/event/subsystems/scheduler/scheduler.o 00:02:19.854 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:19.854 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:19.854 CC module/event/subsystems/sock/sock.o 00:02:19.854 CC module/event/subsystems/iobuf/iobuf.o 00:02:19.854 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:19.854 
LIB libspdk_event_vhost_blk.a 00:02:19.854 LIB libspdk_event_sock.a 00:02:19.854 LIB libspdk_event_vmd.a 00:02:19.854 LIB libspdk_event_scheduler.a 00:02:19.854 LIB libspdk_event_iobuf.a 00:02:19.854 SO libspdk_event_vhost_blk.so.2.0 00:02:19.854 SO libspdk_event_sock.so.4.0 00:02:20.116 SO libspdk_event_scheduler.so.3.0 00:02:20.116 SO libspdk_event_vmd.so.5.0 00:02:20.116 SO libspdk_event_iobuf.so.2.0 00:02:20.116 SYMLINK libspdk_event_vhost_blk.so 00:02:20.116 SYMLINK libspdk_event_sock.so 00:02:20.116 SYMLINK libspdk_event_scheduler.so 00:02:20.116 SYMLINK libspdk_event_vmd.so 00:02:20.116 SYMLINK libspdk_event_iobuf.so 00:02:20.376 CC module/event/subsystems/accel/accel.o 00:02:20.376 LIB libspdk_event_accel.a 00:02:20.376 SO libspdk_event_accel.so.5.0 00:02:20.638 SYMLINK libspdk_event_accel.so 00:02:20.900 CC module/event/subsystems/bdev/bdev.o 00:02:20.900 LIB libspdk_event_bdev.a 00:02:20.900 SO libspdk_event_bdev.so.5.0 00:02:21.162 SYMLINK libspdk_event_bdev.so 00:02:21.424 CC module/event/subsystems/ublk/ublk.o 00:02:21.424 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:21.424 CC module/event/subsystems/scsi/scsi.o 00:02:21.424 CC module/event/subsystems/nbd/nbd.o 00:02:21.424 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:21.424 LIB libspdk_event_ublk.a 00:02:21.424 LIB libspdk_event_nbd.a 00:02:21.424 LIB libspdk_event_scsi.a 00:02:21.424 SO libspdk_event_ublk.so.2.0 00:02:21.424 SO libspdk_event_nbd.so.5.0 00:02:21.424 SO libspdk_event_scsi.so.5.0 00:02:21.424 LIB libspdk_event_nvmf.a 00:02:21.686 SYMLINK libspdk_event_ublk.so 00:02:21.686 SYMLINK libspdk_event_nbd.so 00:02:21.686 SO libspdk_event_nvmf.so.5.0 00:02:21.686 SYMLINK libspdk_event_scsi.so 00:02:21.686 SYMLINK libspdk_event_nvmf.so 00:02:21.948 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:21.948 CC module/event/subsystems/iscsi/iscsi.o 00:02:21.948 LIB libspdk_event_vhost_scsi.a 00:02:21.948 LIB libspdk_event_iscsi.a 00:02:21.948 SO libspdk_event_vhost_scsi.so.2.0 00:02:21.948 SO libspdk_event_iscsi.so.5.0 00:02:22.209 SYMLINK libspdk_event_vhost_scsi.so 00:02:22.209 SYMLINK libspdk_event_iscsi.so 00:02:22.209 SO libspdk.so.5.0 00:02:22.209 SYMLINK libspdk.so 00:02:22.473 CC app/trace_record/trace_record.o 00:02:22.473 CC app/spdk_nvme_perf/perf.o 00:02:22.473 CXX app/trace/trace.o 00:02:22.473 CC app/spdk_nvme_identify/identify.o 00:02:22.473 CC test/rpc_client/rpc_client_test.o 00:02:22.473 CC app/spdk_top/spdk_top.o 00:02:22.473 CC app/spdk_lspci/spdk_lspci.o 00:02:22.473 TEST_HEADER include/spdk/accel.h 00:02:22.473 CC app/spdk_nvme_discover/discovery_aer.o 00:02:22.473 TEST_HEADER include/spdk/accel_module.h 00:02:22.473 TEST_HEADER include/spdk/barrier.h 00:02:22.473 TEST_HEADER include/spdk/assert.h 00:02:22.473 TEST_HEADER include/spdk/base64.h 00:02:22.473 CC app/nvmf_tgt/nvmf_main.o 00:02:22.473 TEST_HEADER include/spdk/bdev_module.h 00:02:22.473 TEST_HEADER include/spdk/bdev.h 00:02:22.473 TEST_HEADER include/spdk/bdev_zone.h 00:02:22.473 CC app/iscsi_tgt/iscsi_tgt.o 00:02:22.473 TEST_HEADER include/spdk/bit_array.h 00:02:22.473 TEST_HEADER include/spdk/bit_pool.h 00:02:22.743 TEST_HEADER include/spdk/blob_bdev.h 00:02:22.743 TEST_HEADER include/spdk/blobfs.h 00:02:22.743 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:22.743 CC app/spdk_dd/spdk_dd.o 00:02:22.743 TEST_HEADER include/spdk/blob.h 00:02:22.743 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:22.743 TEST_HEADER include/spdk/conf.h 00:02:22.743 TEST_HEADER include/spdk/cpuset.h 00:02:22.743 TEST_HEADER include/spdk/config.h 
00:02:22.743 TEST_HEADER include/spdk/crc16.h 00:02:22.743 TEST_HEADER include/spdk/crc32.h 00:02:22.743 CC app/vhost/vhost.o 00:02:22.743 TEST_HEADER include/spdk/crc64.h 00:02:22.743 TEST_HEADER include/spdk/dif.h 00:02:22.743 TEST_HEADER include/spdk/dma.h 00:02:22.743 TEST_HEADER include/spdk/endian.h 00:02:22.743 TEST_HEADER include/spdk/env_dpdk.h 00:02:22.743 TEST_HEADER include/spdk/env.h 00:02:22.743 TEST_HEADER include/spdk/event.h 00:02:22.743 CC app/spdk_tgt/spdk_tgt.o 00:02:22.743 TEST_HEADER include/spdk/fd_group.h 00:02:22.743 TEST_HEADER include/spdk/fd.h 00:02:22.743 TEST_HEADER include/spdk/file.h 00:02:22.743 TEST_HEADER include/spdk/ftl.h 00:02:22.743 TEST_HEADER include/spdk/gpt_spec.h 00:02:22.743 TEST_HEADER include/spdk/hexlify.h 00:02:22.743 TEST_HEADER include/spdk/histogram_data.h 00:02:22.743 TEST_HEADER include/spdk/idxd.h 00:02:22.743 TEST_HEADER include/spdk/idxd_spec.h 00:02:22.743 TEST_HEADER include/spdk/init.h 00:02:22.743 TEST_HEADER include/spdk/ioat.h 00:02:22.743 TEST_HEADER include/spdk/ioat_spec.h 00:02:22.743 TEST_HEADER include/spdk/iscsi_spec.h 00:02:22.743 TEST_HEADER include/spdk/json.h 00:02:22.743 TEST_HEADER include/spdk/jsonrpc.h 00:02:22.743 TEST_HEADER include/spdk/likely.h 00:02:22.743 TEST_HEADER include/spdk/log.h 00:02:22.743 TEST_HEADER include/spdk/lvol.h 00:02:22.743 TEST_HEADER include/spdk/memory.h 00:02:22.743 TEST_HEADER include/spdk/mmio.h 00:02:22.743 TEST_HEADER include/spdk/nbd.h 00:02:22.743 TEST_HEADER include/spdk/notify.h 00:02:22.743 TEST_HEADER include/spdk/nvme_intel.h 00:02:22.743 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:22.743 TEST_HEADER include/spdk/nvme.h 00:02:22.743 TEST_HEADER include/spdk/nvme_zns.h 00:02:22.743 TEST_HEADER include/spdk/nvme_spec.h 00:02:22.743 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:22.743 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:22.743 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:22.743 TEST_HEADER include/spdk/nvmf.h 00:02:22.743 TEST_HEADER include/spdk/nvmf_spec.h 00:02:22.744 TEST_HEADER include/spdk/opal.h 00:02:22.744 TEST_HEADER include/spdk/opal_spec.h 00:02:22.744 TEST_HEADER include/spdk/nvmf_transport.h 00:02:22.744 TEST_HEADER include/spdk/pipe.h 00:02:22.744 TEST_HEADER include/spdk/pci_ids.h 00:02:22.744 TEST_HEADER include/spdk/reduce.h 00:02:22.744 TEST_HEADER include/spdk/queue.h 00:02:22.744 TEST_HEADER include/spdk/scheduler.h 00:02:22.744 TEST_HEADER include/spdk/rpc.h 00:02:22.744 TEST_HEADER include/spdk/scsi.h 00:02:22.744 TEST_HEADER include/spdk/scsi_spec.h 00:02:22.744 TEST_HEADER include/spdk/sock.h 00:02:22.744 TEST_HEADER include/spdk/stdinc.h 00:02:22.744 TEST_HEADER include/spdk/string.h 00:02:22.744 TEST_HEADER include/spdk/thread.h 00:02:22.744 TEST_HEADER include/spdk/trace.h 00:02:22.744 TEST_HEADER include/spdk/trace_parser.h 00:02:22.744 TEST_HEADER include/spdk/tree.h 00:02:22.744 TEST_HEADER include/spdk/util.h 00:02:22.744 TEST_HEADER include/spdk/uuid.h 00:02:22.744 TEST_HEADER include/spdk/ublk.h 00:02:22.744 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:22.744 TEST_HEADER include/spdk/version.h 00:02:22.744 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:22.744 CC examples/vmd/lsvmd/lsvmd.o 00:02:22.744 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:22.744 TEST_HEADER include/spdk/vhost.h 00:02:22.744 TEST_HEADER include/spdk/vmd.h 00:02:22.744 TEST_HEADER include/spdk/xor.h 00:02:22.744 TEST_HEADER include/spdk/zipf.h 00:02:22.744 CC examples/vmd/led/led.o 00:02:22.744 CXX test/cpp_headers/accel.o 00:02:22.744 CC 
test/nvme/sgl/sgl.o 00:02:22.744 CC examples/accel/perf/accel_perf.o 00:02:22.744 CXX test/cpp_headers/accel_module.o 00:02:22.744 CC test/event/event_perf/event_perf.o 00:02:22.744 CXX test/cpp_headers/assert.o 00:02:22.744 CC examples/nvme/hotplug/hotplug.o 00:02:22.744 CC examples/nvme/hello_world/hello_world.o 00:02:22.744 CXX test/cpp_headers/barrier.o 00:02:22.744 CXX test/cpp_headers/base64.o 00:02:22.744 CXX test/cpp_headers/bdev.o 00:02:22.744 CXX test/cpp_headers/bdev_module.o 00:02:22.744 CXX test/cpp_headers/bdev_zone.o 00:02:22.744 CXX test/cpp_headers/bit_array.o 00:02:22.744 CC examples/ioat/verify/verify.o 00:02:22.744 CC test/env/memory/memory_ut.o 00:02:22.744 CC examples/nvme/abort/abort.o 00:02:22.744 CC examples/nvme/reconnect/reconnect.o 00:02:22.744 CC test/app/jsoncat/jsoncat.o 00:02:22.744 CC test/event/app_repeat/app_repeat.o 00:02:22.744 CXX test/cpp_headers/bit_pool.o 00:02:22.744 CXX test/cpp_headers/blobfs.o 00:02:22.744 CC test/app/histogram_perf/histogram_perf.o 00:02:22.744 CXX test/cpp_headers/blob_bdev.o 00:02:22.744 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:22.744 CXX test/cpp_headers/blob.o 00:02:22.744 CC test/nvme/startup/startup.o 00:02:22.744 CC examples/sock/hello_world/hello_sock.o 00:02:22.744 CXX test/cpp_headers/blobfs_bdev.o 00:02:22.744 CC examples/blob/cli/blobcli.o 00:02:22.744 CC examples/nvme/arbitration/arbitration.o 00:02:22.744 CXX test/cpp_headers/config.o 00:02:22.744 CXX test/cpp_headers/conf.o 00:02:22.744 CC test/app/stub/stub.o 00:02:22.744 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:22.744 CXX test/cpp_headers/cpuset.o 00:02:22.744 CC examples/util/zipf/zipf.o 00:02:22.744 CXX test/cpp_headers/crc32.o 00:02:22.744 CXX test/cpp_headers/crc16.o 00:02:22.744 CC examples/ioat/perf/perf.o 00:02:22.744 CXX test/cpp_headers/crc64.o 00:02:22.744 CC test/nvme/aer/aer.o 00:02:22.744 CC test/nvme/e2edp/nvme_dp.o 00:02:22.744 CXX test/cpp_headers/dma.o 00:02:22.744 CC test/event/reactor/reactor.o 00:02:22.744 CC examples/bdev/hello_world/hello_bdev.o 00:02:22.744 CC test/nvme/overhead/overhead.o 00:02:22.744 CC test/nvme/connect_stress/connect_stress.o 00:02:22.744 CXX test/cpp_headers/dif.o 00:02:22.744 CC test/nvme/reset/reset.o 00:02:22.744 CC test/env/vtophys/vtophys.o 00:02:22.744 CXX test/cpp_headers/env_dpdk.o 00:02:22.744 CXX test/cpp_headers/endian.o 00:02:22.744 CC test/nvme/reserve/reserve.o 00:02:22.744 CC test/nvme/err_injection/err_injection.o 00:02:22.744 CXX test/cpp_headers/env.o 00:02:22.744 CC test/env/pci/pci_ut.o 00:02:22.744 CXX test/cpp_headers/event.o 00:02:22.744 CC app/fio/nvme/fio_plugin.o 00:02:22.744 CC test/nvme/simple_copy/simple_copy.o 00:02:22.744 CC test/event/reactor_perf/reactor_perf.o 00:02:22.744 CC examples/nvmf/nvmf/nvmf.o 00:02:22.744 CXX test/cpp_headers/fd_group.o 00:02:22.744 CC test/nvme/boot_partition/boot_partition.o 00:02:22.744 CC examples/blob/hello_world/hello_blob.o 00:02:22.744 CXX test/cpp_headers/file.o 00:02:22.744 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:22.744 CXX test/cpp_headers/fd.o 00:02:22.744 CXX test/cpp_headers/ftl.o 00:02:22.744 CXX test/cpp_headers/gpt_spec.o 00:02:22.744 CXX test/cpp_headers/hexlify.o 00:02:22.744 CC test/event/scheduler/scheduler.o 00:02:22.744 CXX test/cpp_headers/histogram_data.o 00:02:22.744 CC test/nvme/compliance/nvme_compliance.o 00:02:22.744 CXX test/cpp_headers/idxd.o 00:02:22.744 CC test/blobfs/mkfs/mkfs.o 00:02:22.744 CXX test/cpp_headers/ioat_spec.o 00:02:22.744 CXX test/cpp_headers/idxd_spec.o 00:02:22.744 
CXX test/cpp_headers/init.o 00:02:22.744 CC test/nvme/cuse/cuse.o 00:02:22.744 CXX test/cpp_headers/ioat.o 00:02:22.744 CXX test/cpp_headers/iscsi_spec.o 00:02:22.744 CC test/dma/test_dma/test_dma.o 00:02:22.744 CC test/nvme/fused_ordering/fused_ordering.o 00:02:22.744 CC test/nvme/fdp/fdp.o 00:02:22.744 CC examples/idxd/perf/perf.o 00:02:22.744 CXX test/cpp_headers/jsonrpc.o 00:02:22.744 CXX test/cpp_headers/json.o 00:02:22.744 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:22.744 CC test/app/bdev_svc/bdev_svc.o 00:02:22.744 CXX test/cpp_headers/likely.o 00:02:22.744 CC test/thread/poller_perf/poller_perf.o 00:02:22.744 CXX test/cpp_headers/log.o 00:02:22.744 CXX test/cpp_headers/lvol.o 00:02:22.744 CXX test/cpp_headers/memory.o 00:02:22.744 CXX test/cpp_headers/notify.o 00:02:22.744 CXX test/cpp_headers/mmio.o 00:02:22.744 CXX test/cpp_headers/nbd.o 00:02:22.744 CXX test/cpp_headers/nvme.o 00:02:22.744 CC examples/bdev/bdevperf/bdevperf.o 00:02:22.744 CC test/bdev/bdevio/bdevio.o 00:02:22.744 CXX test/cpp_headers/nvme_intel.o 00:02:22.744 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:22.744 CC test/accel/dif/dif.o 00:02:22.744 CXX test/cpp_headers/nvme_ocssd.o 00:02:22.744 CXX test/cpp_headers/nvme_spec.o 00:02:23.025 CC examples/thread/thread/thread_ex.o 00:02:23.025 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:23.025 CXX test/cpp_headers/nvme_zns.o 00:02:23.025 CXX test/cpp_headers/nvmf_cmd.o 00:02:23.025 CXX test/cpp_headers/nvmf.o 00:02:23.025 CXX test/cpp_headers/nvmf_transport.o 00:02:23.025 CXX test/cpp_headers/nvmf_spec.o 00:02:23.025 CXX test/cpp_headers/opal.o 00:02:23.025 CXX test/cpp_headers/opal_spec.o 00:02:23.025 CC app/fio/bdev/fio_plugin.o 00:02:23.025 CXX test/cpp_headers/pci_ids.o 00:02:23.025 CXX test/cpp_headers/pipe.o 00:02:23.025 CXX test/cpp_headers/queue.o 00:02:23.025 CXX test/cpp_headers/reduce.o 00:02:23.025 CXX test/cpp_headers/rpc.o 00:02:23.025 CXX test/cpp_headers/scheduler.o 00:02:23.025 CXX test/cpp_headers/scsi.o 00:02:23.025 CXX test/cpp_headers/scsi_spec.o 00:02:23.025 LINK spdk_lspci 00:02:23.025 CXX test/cpp_headers/sock.o 00:02:23.293 CC test/env/mem_callbacks/mem_callbacks.o 00:02:23.293 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:23.293 LINK nvmf_tgt 00:02:23.293 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:23.293 CC test/lvol/esnap/esnap.o 00:02:23.293 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:23.293 LINK rpc_client_test 00:02:23.293 LINK iscsi_tgt 00:02:23.293 LINK interrupt_tgt 00:02:23.293 LINK vhost 00:02:23.293 LINK spdk_tgt 00:02:23.563 LINK spdk_nvme_discover 00:02:23.563 LINK lsvmd 00:02:23.563 LINK reactor 00:02:23.563 LINK spdk_trace_record 00:02:23.830 LINK histogram_perf 00:02:23.831 LINK app_repeat 00:02:23.831 LINK led 00:02:23.831 LINK event_perf 00:02:23.831 LINK jsoncat 00:02:23.831 LINK env_dpdk_post_init 00:02:23.831 LINK vtophys 00:02:23.831 LINK boot_partition 00:02:23.831 LINK zipf 00:02:23.831 LINK connect_stress 00:02:23.831 LINK mkfs 00:02:23.831 LINK reactor_perf 00:02:23.831 LINK bdev_svc 00:02:23.831 LINK pmr_persistence 00:02:23.831 LINK startup 00:02:23.831 LINK poller_perf 00:02:24.097 LINK verify 00:02:24.097 LINK ioat_perf 00:02:24.097 LINK stub 00:02:24.097 LINK hello_bdev 00:02:24.097 LINK sgl 00:02:24.097 LINK reserve 00:02:24.097 LINK scheduler 00:02:24.097 LINK err_injection 00:02:24.097 LINK hello_world 00:02:24.097 LINK fused_ordering 00:02:24.097 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:24.097 CXX test/cpp_headers/stdinc.o 00:02:24.097 LINK simple_copy 00:02:24.097 LINK cmb_copy 00:02:24.097 
LINK doorbell_aers 00:02:24.097 CXX test/cpp_headers/string.o 00:02:24.097 CXX test/cpp_headers/thread.o 00:02:24.097 CXX test/cpp_headers/trace.o 00:02:24.097 CXX test/cpp_headers/trace_parser.o 00:02:24.097 LINK hello_blob 00:02:24.097 CXX test/cpp_headers/tree.o 00:02:24.097 CXX test/cpp_headers/ublk.o 00:02:24.097 CXX test/cpp_headers/uuid.o 00:02:24.097 CXX test/cpp_headers/util.o 00:02:24.097 CXX test/cpp_headers/version.o 00:02:24.097 CXX test/cpp_headers/vfio_user_pci.o 00:02:24.097 LINK hello_sock 00:02:24.097 LINK spdk_dd 00:02:24.097 LINK overhead 00:02:24.097 CXX test/cpp_headers/vfio_user_spec.o 00:02:24.097 LINK thread 00:02:24.097 CXX test/cpp_headers/vhost.o 00:02:24.097 CXX test/cpp_headers/vmd.o 00:02:24.097 LINK aer 00:02:24.097 CXX test/cpp_headers/xor.o 00:02:24.097 CXX test/cpp_headers/zipf.o 00:02:24.097 LINK hotplug 00:02:24.097 LINK nvmf 00:02:24.097 LINK reset 00:02:24.097 LINK nvme_dp 00:02:24.358 LINK nvme_compliance 00:02:24.358 LINK fdp 00:02:24.358 LINK abort 00:02:24.358 LINK idxd_perf 00:02:24.358 LINK arbitration 00:02:24.358 LINK reconnect 00:02:24.358 LINK spdk_trace 00:02:24.358 LINK dif 00:02:24.358 LINK accel_perf 00:02:24.358 LINK test_dma 00:02:24.358 LINK bdevio 00:02:24.358 LINK pci_ut 00:02:24.358 LINK nvme_manage 00:02:24.358 LINK nvme_fuzz 00:02:24.358 LINK spdk_bdev 00:02:24.358 LINK blobcli 00:02:24.621 LINK spdk_nvme 00:02:24.621 LINK mem_callbacks 00:02:24.621 LINK vhost_fuzz 00:02:24.621 LINK spdk_nvme_identify 00:02:24.621 LINK spdk_nvme_perf 00:02:24.621 LINK spdk_top 00:02:24.883 LINK bdevperf 00:02:24.883 LINK memory_ut 00:02:24.883 LINK cuse 00:02:25.457 LINK iscsi_fuzz 00:02:28.005 LINK esnap 00:02:28.005 00:02:28.005 real 0m48.012s 00:02:28.005 user 6m42.332s 00:02:28.005 sys 5m32.236s 00:02:28.005 12:30:00 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:28.005 12:30:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.005 ************************************ 00:02:28.005 END TEST make 00:02:28.005 ************************************ 00:02:28.267 12:30:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:28.267 12:30:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:28.267 12:30:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:28.267 12:30:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:28.267 12:30:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:28.267 12:30:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:28.267 12:30:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:28.267 12:30:01 -- scripts/common.sh@335 -- # IFS=.-: 00:02:28.267 12:30:01 -- scripts/common.sh@335 -- # read -ra ver1 00:02:28.267 12:30:01 -- scripts/common.sh@336 -- # IFS=.-: 00:02:28.267 12:30:01 -- scripts/common.sh@336 -- # read -ra ver2 00:02:28.267 12:30:01 -- scripts/common.sh@337 -- # local 'op=<' 00:02:28.267 12:30:01 -- scripts/common.sh@339 -- # ver1_l=2 00:02:28.267 12:30:01 -- scripts/common.sh@340 -- # ver2_l=1 00:02:28.267 12:30:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:28.267 12:30:01 -- scripts/common.sh@343 -- # case "$op" in 00:02:28.267 12:30:01 -- scripts/common.sh@344 -- # : 1 00:02:28.267 12:30:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:28.267 12:30:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:28.267 12:30:01 -- scripts/common.sh@364 -- # decimal 1 00:02:28.267 12:30:01 -- scripts/common.sh@352 -- # local d=1 00:02:28.267 12:30:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:28.267 12:30:01 -- scripts/common.sh@354 -- # echo 1 00:02:28.267 12:30:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:28.267 12:30:01 -- scripts/common.sh@365 -- # decimal 2 00:02:28.267 12:30:01 -- scripts/common.sh@352 -- # local d=2 00:02:28.267 12:30:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:28.267 12:30:01 -- scripts/common.sh@354 -- # echo 2 00:02:28.267 12:30:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:28.267 12:30:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:28.267 12:30:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:28.267 12:30:01 -- scripts/common.sh@367 -- # return 0 00:02:28.267 12:30:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:28.267 12:30:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:28.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:28.267 --rc genhtml_branch_coverage=1 00:02:28.267 --rc genhtml_function_coverage=1 00:02:28.267 --rc genhtml_legend=1 00:02:28.267 --rc geninfo_all_blocks=1 00:02:28.267 --rc geninfo_unexecuted_blocks=1 00:02:28.267 00:02:28.267 ' 00:02:28.267 12:30:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:28.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:28.267 --rc genhtml_branch_coverage=1 00:02:28.267 --rc genhtml_function_coverage=1 00:02:28.267 --rc genhtml_legend=1 00:02:28.267 --rc geninfo_all_blocks=1 00:02:28.267 --rc geninfo_unexecuted_blocks=1 00:02:28.267 00:02:28.267 ' 00:02:28.267 12:30:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:28.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:28.267 --rc genhtml_branch_coverage=1 00:02:28.267 --rc genhtml_function_coverage=1 00:02:28.267 --rc genhtml_legend=1 00:02:28.267 --rc geninfo_all_blocks=1 00:02:28.267 --rc geninfo_unexecuted_blocks=1 00:02:28.267 00:02:28.267 ' 00:02:28.267 12:30:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:28.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:28.267 --rc genhtml_branch_coverage=1 00:02:28.267 --rc genhtml_function_coverage=1 00:02:28.267 --rc genhtml_legend=1 00:02:28.267 --rc geninfo_all_blocks=1 00:02:28.267 --rc geninfo_unexecuted_blocks=1 00:02:28.267 00:02:28.267 ' 00:02:28.267 12:30:01 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:28.267 12:30:01 -- nvmf/common.sh@7 -- # uname -s 00:02:28.267 12:30:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:28.267 12:30:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:28.267 12:30:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:28.267 12:30:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:28.267 12:30:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:28.267 12:30:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:28.267 12:30:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:28.267 12:30:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:28.267 12:30:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:28.267 12:30:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:28.267 12:30:01 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:02:28.267 12:30:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:02:28.267 12:30:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:28.267 12:30:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:28.267 12:30:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:28.267 12:30:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:28.267 12:30:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:28.267 12:30:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:28.267 12:30:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:28.267 12:30:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.267 12:30:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.267 12:30:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.267 12:30:01 -- paths/export.sh@5 -- # export PATH 00:02:28.267 12:30:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.267 12:30:01 -- nvmf/common.sh@46 -- # : 0 00:02:28.268 12:30:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:28.268 12:30:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:28.268 12:30:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:28.268 12:30:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:28.268 12:30:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:28.268 12:30:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:28.268 12:30:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:28.268 12:30:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:28.268 12:30:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:28.268 12:30:01 -- spdk/autotest.sh@32 -- # uname -s 00:02:28.268 12:30:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:28.268 12:30:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:28.268 12:30:01 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:28.268 12:30:01 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:28.268 12:30:01 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:28.268 12:30:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:28.268 12:30:01 -- 
spdk/autotest.sh@46 -- # type -P udevadm 00:02:28.268 12:30:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:28.268 12:30:01 -- spdk/autotest.sh@48 -- # udevadm_pid=248479 00:02:28.268 12:30:01 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:28.268 12:30:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:28.268 12:30:01 -- spdk/autotest.sh@54 -- # echo 248481 00:02:28.268 12:30:01 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:28.268 12:30:01 -- spdk/autotest.sh@56 -- # echo 248482 00:02:28.268 12:30:01 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:28.268 12:30:01 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:28.268 12:30:01 -- spdk/autotest.sh@60 -- # echo 248483 00:02:28.268 12:30:01 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:02:28.268 12:30:01 -- spdk/autotest.sh@62 -- # echo 248484 00:02:28.268 12:30:01 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:28.268 12:30:01 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:02:28.268 12:30:01 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:28.268 12:30:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:28.268 12:30:01 -- common/autotest_common.sh@10 -- # set +x 00:02:28.268 12:30:01 -- spdk/autotest.sh@70 -- # create_test_list 00:02:28.268 12:30:01 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:28.268 12:30:01 -- common/autotest_common.sh@10 -- # set +x 00:02:28.268 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:28.268 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:28.268 12:30:01 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:28.268 12:30:01 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:28.268 12:30:01 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:28.268 12:30:01 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:28.268 12:30:01 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:28.268 12:30:01 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:28.268 12:30:01 -- common/autotest_common.sh@1450 -- # uname 00:02:28.268 12:30:01 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:02:28.268 12:30:01 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:28.268 12:30:01 -- common/autotest_common.sh@1470 -- # uname 00:02:28.268 12:30:01 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:02:28.268 12:30:01 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:02:28.268 12:30:01 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 --version 00:02:28.531 lcov: LCOV version 1.15 00:02:28.531 12:30:01 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:31.084 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:31.084 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:31.345 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:31.345 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:31.345 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:31.345 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:57.932 12:30:27 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:02:57.932 12:30:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:57.932 12:30:27 -- common/autotest_common.sh@10 -- # set +x 00:02:57.932 12:30:27 -- spdk/autotest.sh@89 -- # rm -f 00:02:57.932 12:30:27 -- spdk/autotest.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:58.194 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:58.456 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:58.456 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:58.456 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:58.456 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:58.456 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:58.456 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:58.456 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:58.456 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:58.456 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:58.456 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:58.456 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:58.718 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:58.718 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:58.718 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:58.718 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:58.718 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:58.979 12:30:31 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:02:58.979 12:30:31 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:02:58.979 12:30:31 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:02:58.979 12:30:31 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:02:58.979 12:30:31 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:58.979 12:30:31 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:02:58.979 12:30:31 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:02:58.979 12:30:31 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 
00:02:58.979 12:30:31 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:58.979 12:30:31 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:02:58.979 12:30:31 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:02:58.979 12:30:31 -- spdk/autotest.sh@108 -- # grep -v p 00:02:58.979 12:30:31 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:58.979 12:30:31 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:58.979 12:30:31 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:02:58.979 12:30:31 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:58.979 12:30:31 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:58.979 No valid GPT data, bailing 00:02:58.979 12:30:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:58.979 12:30:32 -- scripts/common.sh@393 -- # pt= 00:02:58.979 12:30:32 -- scripts/common.sh@394 -- # return 1 00:02:58.979 12:30:32 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:58.979 1+0 records in 00:02:58.979 1+0 records out 00:02:58.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450742 s, 233 MB/s 00:02:58.979 12:30:32 -- spdk/autotest.sh@116 -- # sync 00:02:58.979 12:30:32 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:58.979 12:30:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:58.979 12:30:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:07.129 12:30:40 -- spdk/autotest.sh@122 -- # uname -s 00:03:07.129 12:30:40 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:07.129 12:30:40 -- spdk/autotest.sh@123 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:07.129 12:30:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:07.129 12:30:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:07.129 12:30:40 -- common/autotest_common.sh@10 -- # set +x 00:03:07.129 ************************************ 00:03:07.129 START TEST setup.sh 00:03:07.129 ************************************ 00:03:07.129 12:30:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:07.129 * Looking for test storage... 
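(The pre_cleanup trace above probes /dev/nvme0n1 with scripts/spdk-gpt.py, reports "No valid GPT data, bailing", and only then dd-zeroes the first MiB of the device and syncs. A rough, hypothetical bash equivalent of that guard — not the actual spdk-gpt.py logic — is sketched below; it assumes a 512-byte-sector disk, where the GPT header and its "EFI PART" signature sit at LBA 1, and it needs root to read the raw device:)

  dev=/dev/nvme0n1                                   # device name taken from the trace above
  sig=$(dd if="$dev" bs=1 skip=512 count=8 2>/dev/null)   # 8 signature bytes at byte offset 512
  if [[ "$sig" != "EFI PART" ]]; then
      echo "no GPT signature on $dev, clearing first MiB"
      dd if=/dev/zero of="$dev" bs=1M count=1
      sync
  fi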
00:03:07.129 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:07.129 12:30:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:07.129 12:30:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:07.129 12:30:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:07.391 12:30:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:07.391 12:30:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:07.391 12:30:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:07.391 12:30:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:07.391 12:30:40 -- scripts/common.sh@335 -- # IFS=.-: 00:03:07.391 12:30:40 -- scripts/common.sh@335 -- # read -ra ver1 00:03:07.391 12:30:40 -- scripts/common.sh@336 -- # IFS=.-: 00:03:07.391 12:30:40 -- scripts/common.sh@336 -- # read -ra ver2 00:03:07.391 12:30:40 -- scripts/common.sh@337 -- # local 'op=<' 00:03:07.391 12:30:40 -- scripts/common.sh@339 -- # ver1_l=2 00:03:07.391 12:30:40 -- scripts/common.sh@340 -- # ver2_l=1 00:03:07.391 12:30:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:07.391 12:30:40 -- scripts/common.sh@343 -- # case "$op" in 00:03:07.391 12:30:40 -- scripts/common.sh@344 -- # : 1 00:03:07.391 12:30:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:07.391 12:30:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:07.391 12:30:40 -- scripts/common.sh@364 -- # decimal 1 00:03:07.391 12:30:40 -- scripts/common.sh@352 -- # local d=1 00:03:07.391 12:30:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:07.391 12:30:40 -- scripts/common.sh@354 -- # echo 1 00:03:07.391 12:30:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:07.391 12:30:40 -- scripts/common.sh@365 -- # decimal 2 00:03:07.391 12:30:40 -- scripts/common.sh@352 -- # local d=2 00:03:07.391 12:30:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:07.391 12:30:40 -- scripts/common.sh@354 -- # echo 2 00:03:07.392 12:30:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:07.392 12:30:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:07.392 12:30:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:07.392 12:30:40 -- scripts/common.sh@367 -- # return 0 00:03:07.392 12:30:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:07.392 12:30:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:07.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.392 --rc genhtml_branch_coverage=1 00:03:07.392 --rc genhtml_function_coverage=1 00:03:07.392 --rc genhtml_legend=1 00:03:07.392 --rc geninfo_all_blocks=1 00:03:07.392 --rc geninfo_unexecuted_blocks=1 00:03:07.392 00:03:07.392 ' 00:03:07.392 12:30:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:07.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.392 --rc genhtml_branch_coverage=1 00:03:07.392 --rc genhtml_function_coverage=1 00:03:07.392 --rc genhtml_legend=1 00:03:07.392 --rc geninfo_all_blocks=1 00:03:07.392 --rc geninfo_unexecuted_blocks=1 00:03:07.392 00:03:07.392 ' 00:03:07.392 12:30:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:07.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.392 --rc genhtml_branch_coverage=1 00:03:07.392 --rc genhtml_function_coverage=1 00:03:07.392 --rc genhtml_legend=1 00:03:07.392 --rc geninfo_all_blocks=1 00:03:07.392 --rc geninfo_unexecuted_blocks=1 00:03:07.392 00:03:07.392 ' 
00:03:07.392 12:30:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:07.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.392 --rc genhtml_branch_coverage=1 00:03:07.392 --rc genhtml_function_coverage=1 00:03:07.392 --rc genhtml_legend=1 00:03:07.392 --rc geninfo_all_blocks=1 00:03:07.392 --rc geninfo_unexecuted_blocks=1 00:03:07.392 00:03:07.392 ' 00:03:07.392 12:30:40 -- setup/test-setup.sh@10 -- # uname -s 00:03:07.392 12:30:40 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:07.392 12:30:40 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:07.392 12:30:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:07.392 12:30:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:07.392 12:30:40 -- common/autotest_common.sh@10 -- # set +x 00:03:07.392 ************************************ 00:03:07.392 START TEST acl 00:03:07.392 ************************************ 00:03:07.392 12:30:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:07.392 * Looking for test storage... 00:03:07.392 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:07.392 12:30:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:07.392 12:30:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:07.392 12:30:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:07.654 12:30:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:07.654 12:30:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:07.654 12:30:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:07.654 12:30:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:07.654 12:30:40 -- scripts/common.sh@335 -- # IFS=.-: 00:03:07.654 12:30:40 -- scripts/common.sh@335 -- # read -ra ver1 00:03:07.654 12:30:40 -- scripts/common.sh@336 -- # IFS=.-: 00:03:07.654 12:30:40 -- scripts/common.sh@336 -- # read -ra ver2 00:03:07.654 12:30:40 -- scripts/common.sh@337 -- # local 'op=<' 00:03:07.654 12:30:40 -- scripts/common.sh@339 -- # ver1_l=2 00:03:07.654 12:30:40 -- scripts/common.sh@340 -- # ver2_l=1 00:03:07.654 12:30:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:07.654 12:30:40 -- scripts/common.sh@343 -- # case "$op" in 00:03:07.654 12:30:40 -- scripts/common.sh@344 -- # : 1 00:03:07.654 12:30:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:07.654 12:30:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:07.654 12:30:40 -- scripts/common.sh@364 -- # decimal 1 00:03:07.654 12:30:40 -- scripts/common.sh@352 -- # local d=1 00:03:07.654 12:30:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:07.654 12:30:40 -- scripts/common.sh@354 -- # echo 1 00:03:07.654 12:30:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:07.654 12:30:40 -- scripts/common.sh@365 -- # decimal 2 00:03:07.654 12:30:40 -- scripts/common.sh@352 -- # local d=2 00:03:07.654 12:30:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:07.654 12:30:40 -- scripts/common.sh@354 -- # echo 2 00:03:07.654 12:30:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:07.654 12:30:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:07.654 12:30:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:07.654 12:30:40 -- scripts/common.sh@367 -- # return 0 00:03:07.654 12:30:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:07.654 12:30:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:07.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.654 --rc genhtml_branch_coverage=1 00:03:07.654 --rc genhtml_function_coverage=1 00:03:07.654 --rc genhtml_legend=1 00:03:07.654 --rc geninfo_all_blocks=1 00:03:07.654 --rc geninfo_unexecuted_blocks=1 00:03:07.654 00:03:07.654 ' 00:03:07.654 12:30:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:07.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.655 --rc genhtml_branch_coverage=1 00:03:07.655 --rc genhtml_function_coverage=1 00:03:07.655 --rc genhtml_legend=1 00:03:07.655 --rc geninfo_all_blocks=1 00:03:07.655 --rc geninfo_unexecuted_blocks=1 00:03:07.655 00:03:07.655 ' 00:03:07.655 12:30:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:07.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.655 --rc genhtml_branch_coverage=1 00:03:07.655 --rc genhtml_function_coverage=1 00:03:07.655 --rc genhtml_legend=1 00:03:07.655 --rc geninfo_all_blocks=1 00:03:07.655 --rc geninfo_unexecuted_blocks=1 00:03:07.655 00:03:07.655 ' 00:03:07.655 12:30:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:07.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.655 --rc genhtml_branch_coverage=1 00:03:07.655 --rc genhtml_function_coverage=1 00:03:07.655 --rc genhtml_legend=1 00:03:07.655 --rc geninfo_all_blocks=1 00:03:07.655 --rc geninfo_unexecuted_blocks=1 00:03:07.655 00:03:07.655 ' 00:03:07.655 12:30:40 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:07.655 12:30:40 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:07.655 12:30:40 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:07.655 12:30:40 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:07.655 12:30:40 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:07.655 12:30:40 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:07.655 12:30:40 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:07.655 12:30:40 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:07.655 12:30:40 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:07.655 12:30:40 -- setup/acl.sh@12 -- # devs=() 00:03:07.655 12:30:40 -- setup/acl.sh@12 -- # declare -a devs 00:03:07.655 12:30:40 -- setup/acl.sh@13 -- # drivers=() 00:03:07.655 12:30:40 -- setup/acl.sh@13 -- # declare -A drivers 00:03:07.655 12:30:40 -- setup/acl.sh@51 -- # 
setup reset 00:03:07.655 12:30:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:07.655 12:30:40 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.870 12:30:44 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:11.870 12:30:44 -- setup/acl.sh@16 -- # local dev driver 00:03:11.870 12:30:44 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.870 12:30:44 -- setup/acl.sh@15 -- # setup output status 00:03:11.870 12:30:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.870 12:30:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:15.185 Hugepages 00:03:15.185 node hugesize free / total 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 00:03:15.185 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- 
setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:15.185 12:30:48 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:15.185 12:30:48 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.185 12:30:48 -- setup/acl.sh@20 -- # continue 00:03:15.185 12:30:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.185 12:30:48 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:15.185 12:30:48 -- setup/acl.sh@54 -- # run_test denied denied 00:03:15.185 12:30:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:15.185 12:30:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:15.185 12:30:48 -- common/autotest_common.sh@10 -- # set +x 00:03:15.185 ************************************ 00:03:15.185 START TEST denied 00:03:15.185 ************************************ 00:03:15.185 12:30:48 -- common/autotest_common.sh@1114 -- # denied 00:03:15.185 12:30:48 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:15.185 12:30:48 -- setup/acl.sh@38 -- # setup output config 
00:03:15.185 12:30:48 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:15.185 12:30:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.185 12:30:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:19.396 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:19.396 12:30:52 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:19.396 12:30:52 -- setup/acl.sh@28 -- # local dev driver 00:03:19.396 12:30:52 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:19.396 12:30:52 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:19.396 12:30:52 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:19.396 12:30:52 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:19.396 12:30:52 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:19.396 12:30:52 -- setup/acl.sh@41 -- # setup reset 00:03:19.396 12:30:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:19.396 12:30:52 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.691 00:03:24.691 real 0m9.221s 00:03:24.691 user 0m3.108s 00:03:24.691 sys 0m5.304s 00:03:24.691 12:30:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:24.691 12:30:57 -- common/autotest_common.sh@10 -- # set +x 00:03:24.691 ************************************ 00:03:24.691 END TEST denied 00:03:24.691 ************************************ 00:03:24.691 12:30:57 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:24.691 12:30:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:24.691 12:30:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:24.691 12:30:57 -- common/autotest_common.sh@10 -- # set +x 00:03:24.691 ************************************ 00:03:24.691 START TEST allowed 00:03:24.691 ************************************ 00:03:24.691 12:30:57 -- common/autotest_common.sh@1114 -- # allowed 00:03:24.691 12:30:57 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:24.691 12:30:57 -- setup/acl.sh@45 -- # setup output config 00:03:24.691 12:30:57 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:24.691 12:30:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.691 12:30:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:31.280 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:31.280 12:31:03 -- setup/acl.sh@47 -- # verify 00:03:31.280 12:31:03 -- setup/acl.sh@28 -- # local dev driver 00:03:31.280 12:31:03 -- setup/acl.sh@48 -- # setup reset 00:03:31.280 12:31:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.281 12:31:03 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.588 00:03:34.588 real 0m10.020s 00:03:34.588 user 0m3.046s 00:03:34.588 sys 0m5.283s 00:03:34.588 12:31:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:34.588 12:31:07 -- common/autotest_common.sh@10 -- # set +x 00:03:34.588 ************************************ 00:03:34.588 END TEST allowed 00:03:34.588 ************************************ 00:03:34.588 00:03:34.588 real 0m27.272s 00:03:34.588 user 0m9.187s 00:03:34.588 sys 0m15.746s 00:03:34.588 12:31:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:34.588 12:31:07 -- common/autotest_common.sh@10 -- # set +x 00:03:34.588 ************************************ 00:03:34.588 END TEST acl 00:03:34.588 ************************************ 
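The acl test that just finished drives scripts/setup.sh twice against the NVMe controller at 0000:65:00.0: once with PCI_BLOCKED set, expecting the "Skipping denied controller" message and the device left on its kernel driver, and once with PCI_ALLOWED set, expecting a rebind to vfio-pci. A rough sketch of that flow, assuming only the setup.sh sub-commands and environment variables that appear in this log (the paths and BDF mirror this particular run and would differ elsewhere):

    #!/usr/bin/env bash
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    BDF=0000:65:00.0

    # Deny-list the controller: config should report it as skipped.
    PCI_BLOCKED=" $BDF" "$SPDK_DIR/scripts/setup.sh" config
    "$SPDK_DIR/scripts/setup.sh" status | grep "$BDF"
    "$SPDK_DIR/scripts/setup.sh" reset

    # Allow-list only the controller: it should end up bound to vfio-pci.
    PCI_ALLOWED="$BDF" "$SPDK_DIR/scripts/setup.sh" config
    readlink -f "/sys/bus/pci/devices/$BDF/driver"
    "$SPDK_DIR/scripts/setup.sh" reset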
00:03:34.589 12:31:07 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:34.589 12:31:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.589 12:31:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.589 12:31:07 -- common/autotest_common.sh@10 -- # set +x 00:03:34.589 ************************************ 00:03:34.589 START TEST hugepages 00:03:34.589 ************************************ 00:03:34.589 12:31:07 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:34.850 * Looking for test storage... 00:03:34.850 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:34.850 12:31:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:34.850 12:31:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:34.850 12:31:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:34.850 12:31:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:34.850 12:31:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:34.850 12:31:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:34.850 12:31:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:34.850 12:31:07 -- scripts/common.sh@335 -- # IFS=.-: 00:03:34.850 12:31:07 -- scripts/common.sh@335 -- # read -ra ver1 00:03:34.850 12:31:07 -- scripts/common.sh@336 -- # IFS=.-: 00:03:34.850 12:31:07 -- scripts/common.sh@336 -- # read -ra ver2 00:03:34.850 12:31:07 -- scripts/common.sh@337 -- # local 'op=<' 00:03:34.850 12:31:07 -- scripts/common.sh@339 -- # ver1_l=2 00:03:34.850 12:31:07 -- scripts/common.sh@340 -- # ver2_l=1 00:03:34.850 12:31:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:34.850 12:31:07 -- scripts/common.sh@343 -- # case "$op" in 00:03:34.850 12:31:07 -- scripts/common.sh@344 -- # : 1 00:03:34.850 12:31:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:34.850 12:31:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:34.850 12:31:07 -- scripts/common.sh@364 -- # decimal 1 00:03:34.850 12:31:07 -- scripts/common.sh@352 -- # local d=1 00:03:34.850 12:31:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:34.850 12:31:07 -- scripts/common.sh@354 -- # echo 1 00:03:34.850 12:31:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:34.850 12:31:07 -- scripts/common.sh@365 -- # decimal 2 00:03:34.850 12:31:07 -- scripts/common.sh@352 -- # local d=2 00:03:34.850 12:31:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:34.850 12:31:07 -- scripts/common.sh@354 -- # echo 2 00:03:34.850 12:31:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:34.850 12:31:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:34.850 12:31:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:34.850 12:31:07 -- scripts/common.sh@367 -- # return 0 00:03:34.850 12:31:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:34.850 12:31:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:34.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.850 --rc genhtml_branch_coverage=1 00:03:34.850 --rc genhtml_function_coverage=1 00:03:34.850 --rc genhtml_legend=1 00:03:34.850 --rc geninfo_all_blocks=1 00:03:34.850 --rc geninfo_unexecuted_blocks=1 00:03:34.850 00:03:34.851 ' 00:03:34.851 12:31:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:34.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.851 --rc genhtml_branch_coverage=1 00:03:34.851 --rc genhtml_function_coverage=1 00:03:34.851 --rc genhtml_legend=1 00:03:34.851 --rc geninfo_all_blocks=1 00:03:34.851 --rc geninfo_unexecuted_blocks=1 00:03:34.851 00:03:34.851 ' 00:03:34.851 12:31:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:34.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.851 --rc genhtml_branch_coverage=1 00:03:34.851 --rc genhtml_function_coverage=1 00:03:34.851 --rc genhtml_legend=1 00:03:34.851 --rc geninfo_all_blocks=1 00:03:34.851 --rc geninfo_unexecuted_blocks=1 00:03:34.851 00:03:34.851 ' 00:03:34.851 12:31:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:34.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.851 --rc genhtml_branch_coverage=1 00:03:34.851 --rc genhtml_function_coverage=1 00:03:34.851 --rc genhtml_legend=1 00:03:34.851 --rc geninfo_all_blocks=1 00:03:34.851 --rc geninfo_unexecuted_blocks=1 00:03:34.851 00:03:34.851 ' 00:03:34.851 12:31:07 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:34.851 12:31:07 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:34.851 12:31:07 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:34.851 12:31:07 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:34.851 12:31:07 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:34.851 12:31:07 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:34.851 12:31:07 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:34.851 12:31:07 -- setup/common.sh@18 -- # local node= 00:03:34.851 12:31:07 -- setup/common.sh@19 -- # local var val 00:03:34.851 12:31:07 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.851 12:31:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.851 12:31:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.851 12:31:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.851 12:31:07 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.851 
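The long stretch of trace that follows is setup/common.sh's get_meminfo reading the whole of /proc/meminfo and discarding keys until it reaches Hugepagesize (the "echo 2048" much further down). A condensed sketch of that kind of lookup, with an illustrative function name and without the per-NUMA-node handling the real helper also has:

    #!/usr/bin/env bash
    # Look up a single key in /proc/meminfo, e.g. "Hugepagesize" -> "2048".
    meminfo() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    default_hugepages_kb=$(meminfo Hugepagesize)
    echo "default hugepage size: ${default_hugepages_kb} kB"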
12:31:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 107133748 kB' 'MemAvailable: 110564300 kB' 'Buffers: 9536 kB' 'Cached: 9548356 kB' 'SwapCached: 0 kB' 'Active: 7135508 kB' 'Inactive: 3687920 kB' 'Active(anon): 6689808 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1269008 kB' 'Mapped: 146856 kB' 'Shmem: 5424272 kB' 'KReclaimable: 242384 kB' 'Slab: 1117388 kB' 'SReclaimable: 242384 kB' 'SUnreclaim: 875004 kB' 'KernelStack: 26992 kB' 'PageTables: 9428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69453824 kB' 'Committed_AS: 9029860 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232656 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 
12:31:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.851 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.851 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.851 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.852 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.852 12:31:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.852 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.852 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.852 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.852 12:31:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.852 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.852 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.852 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.852 12:31:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.852 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.852 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.852 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.852 12:31:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.852 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.852 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.852 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.852 12:31:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.852 12:31:07 -- setup/common.sh@32 -- # continue 00:03:34.852 12:31:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.852 12:31:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.852 12:31:07 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:34.852 12:31:07 -- setup/common.sh@33 -- # echo 2048 00:03:34.852 12:31:07 -- setup/common.sh@33 -- # return 0 00:03:34.852 12:31:07 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:34.852 12:31:07 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:34.852 12:31:07 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:34.852 12:31:07 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:34.852 12:31:07 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:34.852 12:31:07 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:34.852 12:31:07 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:34.852 12:31:07 -- setup/hugepages.sh@207 -- # get_nodes 00:03:34.852 12:31:07 -- setup/hugepages.sh@27 -- # local node 00:03:34.852 12:31:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.852 12:31:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:34.852 12:31:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.852 12:31:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:34.852 12:31:07 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:34.852 12:31:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.852 12:31:07 -- setup/hugepages.sh@208 -- # clear_hp 00:03:34.852 12:31:07 -- setup/hugepages.sh@37 -- # local node hp 00:03:34.852 12:31:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:34.852 12:31:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.852 12:31:07 -- setup/hugepages.sh@41 -- # echo 0 
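The clear_hp trace surrounding this point iterates every NUMA node's hugepages-*kB directories and writes 0 to each nr_hugepages before default_setup computes its own reservation (2097152 kB of 2048 kB pages on node 0, i.e. the 1024 pages seen in the trace). A bare-bones equivalent using the standard sysfs paths, with a made-up function name and requiring root:

    #!/usr/bin/env bash
    # Zero out per-node hugepage reservations of every size, as clear_hp does.
    clear_hugepages() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*kB; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
    }
    clear_hugepages
    # Reserve 2097152 kB / 2048 kB = 1024 two-megabyte pages on node 0 only.
    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages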
00:03:34.852 12:31:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.852 12:31:07 -- setup/hugepages.sh@41 -- # echo 0 00:03:34.852 12:31:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:34.852 12:31:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.852 12:31:07 -- setup/hugepages.sh@41 -- # echo 0 00:03:34.852 12:31:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.852 12:31:07 -- setup/hugepages.sh@41 -- # echo 0 00:03:34.852 12:31:07 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:34.852 12:31:07 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:34.852 12:31:07 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:34.852 12:31:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.852 12:31:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.852 12:31:07 -- common/autotest_common.sh@10 -- # set +x 00:03:34.852 ************************************ 00:03:34.852 START TEST default_setup 00:03:34.852 ************************************ 00:03:34.852 12:31:07 -- common/autotest_common.sh@1114 -- # default_setup 00:03:34.852 12:31:07 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:34.852 12:31:07 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:34.852 12:31:07 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:34.852 12:31:07 -- setup/hugepages.sh@51 -- # shift 00:03:34.852 12:31:07 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:34.852 12:31:07 -- setup/hugepages.sh@52 -- # local node_ids 00:03:34.852 12:31:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.852 12:31:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:34.852 12:31:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:34.852 12:31:07 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:34.852 12:31:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.852 12:31:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:34.852 12:31:07 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:34.852 12:31:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.852 12:31:07 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.852 12:31:07 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:34.852 12:31:07 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:34.852 12:31:07 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:34.852 12:31:07 -- setup/hugepages.sh@73 -- # return 0 00:03:34.852 12:31:07 -- setup/hugepages.sh@137 -- # setup output 00:03:34.852 12:31:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.852 12:31:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:39.076 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:39.076 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:39.076 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:39.076 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:39.076 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:39.076 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:39.076 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:39.076 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:39.076 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:39.076 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:39.076 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 
00:03:39.076 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:39.076 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:39.076 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:39.076 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:39.076 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:39.076 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:39.076 12:31:12 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:39.076 12:31:12 -- setup/hugepages.sh@89 -- # local node 00:03:39.076 12:31:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:39.076 12:31:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:39.076 12:31:12 -- setup/hugepages.sh@92 -- # local surp 00:03:39.076 12:31:12 -- setup/hugepages.sh@93 -- # local resv 00:03:39.076 12:31:12 -- setup/hugepages.sh@94 -- # local anon 00:03:39.076 12:31:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:39.076 12:31:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:39.076 12:31:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:39.076 12:31:12 -- setup/common.sh@18 -- # local node= 00:03:39.076 12:31:12 -- setup/common.sh@19 -- # local var val 00:03:39.076 12:31:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.076 12:31:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.076 12:31:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.076 12:31:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.076 12:31:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.076 12:31:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.076 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.076 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109328128 kB' 'MemAvailable: 112758168 kB' 'Buffers: 9536 kB' 'Cached: 9548488 kB' 'SwapCached: 0 kB' 'Active: 7139228 kB' 'Inactive: 3687920 kB' 'Active(anon): 6693528 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1272536 kB' 'Mapped: 146992 kB' 'Shmem: 5424404 kB' 'KReclaimable: 241360 kB' 'Slab: 1114768 kB' 'SReclaimable: 241360 kB' 'SUnreclaim: 873408 kB' 'KernelStack: 27040 kB' 'PageTables: 9564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9036320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232896 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 
12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ KernelStack 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.077 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.077 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 
-- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.078 12:31:12 -- setup/common.sh@33 -- # echo 0 00:03:39.078 12:31:12 -- setup/common.sh@33 -- # return 0 00:03:39.078 12:31:12 -- setup/hugepages.sh@97 -- # anon=0 00:03:39.078 12:31:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:39.078 12:31:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.078 12:31:12 -- setup/common.sh@18 -- # local node= 00:03:39.078 12:31:12 -- setup/common.sh@19 -- # local var val 00:03:39.078 12:31:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.078 12:31:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.078 12:31:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.078 12:31:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.078 12:31:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.078 12:31:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109326208 kB' 'MemAvailable: 112756248 kB' 'Buffers: 9536 kB' 'Cached: 9548492 kB' 'SwapCached: 0 kB' 'Active: 7139708 kB' 'Inactive: 3687920 kB' 'Active(anon): 6694008 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1273048 kB' 'Mapped: 146992 kB' 'Shmem: 5424408 kB' 'KReclaimable: 241360 kB' 'Slab: 1114696 kB' 'SReclaimable: 241360 kB' 'SUnreclaim: 873336 kB' 'KernelStack: 27104 kB' 'PageTables: 9528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9036332 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232864 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # 
continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.078 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.078 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 
12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.079 12:31:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.079 12:31:12 -- setup/common.sh@33 -- # echo 0 00:03:39.079 12:31:12 -- setup/common.sh@33 -- # return 0 00:03:39.079 12:31:12 -- setup/hugepages.sh@99 -- # surp=0 00:03:39.079 12:31:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:39.079 12:31:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:39.079 12:31:12 -- setup/common.sh@18 -- # local node= 00:03:39.079 12:31:12 -- setup/common.sh@19 -- # local var val 00:03:39.079 12:31:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.079 12:31:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.079 12:31:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.079 12:31:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.079 12:31:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.079 12:31:12 -- setup/common.sh@29 
-- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.079 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109324456 kB' 'MemAvailable: 112754496 kB' 'Buffers: 9536 kB' 'Cached: 9548496 kB' 'SwapCached: 0 kB' 'Active: 7139100 kB' 'Inactive: 3687920 kB' 'Active(anon): 6693400 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1272396 kB' 'Mapped: 146960 kB' 'Shmem: 5424412 kB' 'KReclaimable: 241360 kB' 'Slab: 1114696 kB' 'SReclaimable: 241360 kB' 'SUnreclaim: 873336 kB' 'KernelStack: 27152 kB' 'PageTables: 9452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9036484 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232880 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 
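[editor's note] The repeated "[[ <key> == ... ]] / continue" entries above and below are bash xtrace from setup/common.sh scanning /proc/meminfo one field at a time until it reaches the requested key (AnonHugePages, HugePages_Surp, HugePages_Rsvd, ...). As a reading aid, the following is a minimal sketch of that scan reconstructed from the trace; it is a hypothetical helper based only on the commands visible in this log, not the shipped SPDK script.
# Illustrative reconstruction of the meminfo scan traced in this log;
# a sketch, not the actual setup/common.sh get_meminfo implementation.
shopt -s extglob
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # A per-node query reads that node's own meminfo copy instead,
    # matching the [[ -e /sys/devices/system/node/node<N>/meminfo ]] check in the trace.
    if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    local mem line var val _
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix found in per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Every non-matching key corresponds to one "continue" entry in the trace above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}
get_meminfo HugePages_Rsvd     # whole-system value, as queried above
get_meminfo HugePages_Surp 0   # node 0 value, as queried later in this log
[end editor's note]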
00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.080 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.080 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ 
CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.081 12:31:12 -- setup/common.sh@33 -- # echo 0 00:03:39.081 12:31:12 -- setup/common.sh@33 -- # return 0 00:03:39.081 12:31:12 -- setup/hugepages.sh@100 -- # resv=0 00:03:39.081 12:31:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:39.081 nr_hugepages=1024 00:03:39.081 12:31:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:39.081 resv_hugepages=0 00:03:39.081 12:31:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:39.081 surplus_hugepages=0 00:03:39.081 12:31:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:39.081 anon_hugepages=0 00:03:39.081 12:31:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.081 12:31:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:39.081 12:31:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:39.081 12:31:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:39.081 12:31:12 -- setup/common.sh@18 -- # local node= 00:03:39.081 12:31:12 -- setup/common.sh@19 -- # local var val 00:03:39.081 12:31:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.081 12:31:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.081 12:31:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.081 12:31:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.081 12:31:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.081 12:31:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109323284 kB' 'MemAvailable: 112753324 kB' 'Buffers: 9536 kB' 'Cached: 9548496 kB' 'SwapCached: 0 kB' 'Active: 7138296 kB' 'Inactive: 3687920 kB' 'Active(anon): 6692596 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 
kB' 'Writeback: 0 kB' 'AnonPages: 1271540 kB' 'Mapped: 146960 kB' 'Shmem: 5424412 kB' 'KReclaimable: 241360 kB' 'Slab: 1114752 kB' 'SReclaimable: 241360 kB' 'SUnreclaim: 873392 kB' 'KernelStack: 27088 kB' 'PageTables: 9416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9036496 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232864 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.081 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.081 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # 
continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.082 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.082 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.083 12:31:12 -- setup/common.sh@33 -- # echo 1024 00:03:39.083 12:31:12 -- setup/common.sh@33 -- # return 0 00:03:39.083 12:31:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.083 12:31:12 -- setup/hugepages.sh@112 -- # get_nodes 00:03:39.083 12:31:12 -- setup/hugepages.sh@27 -- # local node 00:03:39.083 12:31:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.083 12:31:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:39.083 12:31:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.083 12:31:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:39.083 12:31:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:39.083 12:31:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:39.083 12:31:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.083 12:31:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.083 12:31:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:39.083 12:31:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.083 12:31:12 -- setup/common.sh@18 -- # local node=0 00:03:39.083 12:31:12 -- setup/common.sh@19 -- # local var val 00:03:39.083 12:31:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.083 12:31:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.083 12:31:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:39.083 12:31:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:39.083 12:31:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.083 12:31:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652968 kB' 'MemFree: 53276112 kB' 'MemUsed: 12376856 kB' 'SwapCached: 0 kB' 'Active: 5200812 kB' 'Inactive: 3583492 kB' 'Active(anon): 4954960 kB' 'Inactive(anon): 0 kB' 'Active(file): 245852 kB' 'Inactive(file): 3583492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7891784 kB' 'Mapped: 54252 kB' 'AnonPages: 895232 kB' 'Shmem: 4062440 kB' 'KernelStack: 13912 kB' 'PageTables: 4656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121336 kB' 'Slab: 534724 kB' 'SReclaimable: 121336 kB' 'SUnreclaim: 413388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 
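[editor's note] At this point the trace has switched mem_f to /sys/devices/system/node/node0/meminfo and is gathering the per-node counters that feed the "node0=1024 expecting 1024" check at the end of this section. A simplified stand-in for that per-node verification, assuming the get_meminfo sketch shown earlier, is below; the expected counts are copied from the log message, not a general rule, and this is not the exact accounting done by setup/hugepages.sh.
# Hypothetical per-node check mirroring the "node0=1024 expecting 1024"
# message in the trace; illustrative only.
declare -A expected=( [0]=1024 [1]=0 )
status=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    got=$(get_meminfo HugePages_Total "$node")
    echo "node${node}=${got} expecting ${expected[$node]}"
    [[ $got == "${expected[$node]}" ]] || status=1
done
exit $status
[end editor's note]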
00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.083 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.083 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # continue 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.084 12:31:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.084 12:31:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.084 12:31:12 -- setup/common.sh@33 -- # echo 0 00:03:39.084 12:31:12 -- setup/common.sh@33 -- # return 0 00:03:39.084 12:31:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.084 12:31:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:39.084 12:31:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.084 12:31:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:39.084 12:31:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:39.084 node0=1024 expecting 1024 00:03:39.084 12:31:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:39.084 00:03:39.084 real 0m4.252s 00:03:39.084 user 0m1.620s 00:03:39.084 sys 0m2.621s 00:03:39.084 12:31:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:39.084 12:31:12 -- 
common/autotest_common.sh@10 -- # set +x 00:03:39.084 ************************************ 00:03:39.084 END TEST default_setup 00:03:39.084 ************************************ 00:03:39.346 12:31:12 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:39.346 12:31:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:39.346 12:31:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:39.346 12:31:12 -- common/autotest_common.sh@10 -- # set +x 00:03:39.346 ************************************ 00:03:39.346 START TEST per_node_1G_alloc 00:03:39.346 ************************************ 00:03:39.346 12:31:12 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:03:39.346 12:31:12 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:39.346 12:31:12 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:39.346 12:31:12 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:39.346 12:31:12 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:39.346 12:31:12 -- setup/hugepages.sh@51 -- # shift 00:03:39.346 12:31:12 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:39.346 12:31:12 -- setup/hugepages.sh@52 -- # local node_ids 00:03:39.346 12:31:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:39.346 12:31:12 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:39.346 12:31:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:39.346 12:31:12 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:39.346 12:31:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.346 12:31:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:39.346 12:31:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:39.346 12:31:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.346 12:31:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.346 12:31:12 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:39.346 12:31:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:39.346 12:31:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:39.346 12:31:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:39.346 12:31:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:39.346 12:31:12 -- setup/hugepages.sh@73 -- # return 0 00:03:39.346 12:31:12 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:39.346 12:31:12 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:39.346 12:31:12 -- setup/hugepages.sh@146 -- # setup output 00:03:39.346 12:31:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.346 12:31:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:42.652 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:42.652 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:42.652 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:42.652 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:42.652 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:42.652 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:42.652 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:42.652 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:42.652 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:42.652 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:42.652 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:42.652 0000:00:01.4 (8086 0b00): Already 
using the vfio-pci driver 00:03:42.652 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:42.652 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:42.652 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:42.652 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:42.652 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:43.231 12:31:16 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:43.231 12:31:16 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:43.231 12:31:16 -- setup/hugepages.sh@89 -- # local node 00:03:43.231 12:31:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.231 12:31:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.231 12:31:16 -- setup/hugepages.sh@92 -- # local surp 00:03:43.231 12:31:16 -- setup/hugepages.sh@93 -- # local resv 00:03:43.231 12:31:16 -- setup/hugepages.sh@94 -- # local anon 00:03:43.231 12:31:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.231 12:31:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.231 12:31:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.231 12:31:16 -- setup/common.sh@18 -- # local node= 00:03:43.231 12:31:16 -- setup/common.sh@19 -- # local var val 00:03:43.231 12:31:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.231 12:31:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.231 12:31:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.231 12:31:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.231 12:31:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.231 12:31:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.231 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.231 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109333216 kB' 'MemAvailable: 112763256 kB' 'Buffers: 9536 kB' 'Cached: 9548636 kB' 'SwapCached: 0 kB' 'Active: 7141604 kB' 'Inactive: 3687920 kB' 'Active(anon): 6695904 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1274640 kB' 'Mapped: 145888 kB' 'Shmem: 5424552 kB' 'KReclaimable: 241360 kB' 'Slab: 1114920 kB' 'SReclaimable: 241360 kB' 'SUnreclaim: 873560 kB' 'KernelStack: 26960 kB' 'PageTables: 9068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9022392 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232976 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.232 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.232 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 
12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.233 12:31:16 -- 
setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.233 12:31:16 -- setup/common.sh@33 -- # echo 0 00:03:43.233 12:31:16 -- setup/common.sh@33 -- # return 0 00:03:43.233 12:31:16 -- setup/hugepages.sh@97 -- # anon=0 00:03:43.233 12:31:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.233 12:31:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.233 12:31:16 -- setup/common.sh@18 -- # local node= 00:03:43.233 12:31:16 -- setup/common.sh@19 -- # local var val 00:03:43.233 12:31:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.233 12:31:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.233 12:31:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.233 12:31:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.233 12:31:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.233 12:31:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109334240 kB' 'MemAvailable: 112764280 kB' 'Buffers: 9536 kB' 'Cached: 9548636 kB' 'SwapCached: 0 kB' 'Active: 7141940 kB' 'Inactive: 3687920 kB' 'Active(anon): 6696240 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1274984 kB' 'Mapped: 145888 kB' 'Shmem: 5424552 kB' 'KReclaimable: 241360 kB' 'Slab: 1114888 kB' 'SReclaimable: 241360 kB' 'SUnreclaim: 873528 kB' 'KernelStack: 26928 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9022404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232944 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.233 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.233 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 
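The xtrace above shows setup/common.sh resolving get_meminfo HugePages_Surp: the snapshot of /proc/meminfo is walked entry by entry with IFS=': ', every non-matching key falls through to continue, and the matching key's value is echoed (0 in this run). A minimal stand-alone sketch of that lookup pattern follows; the function name get_meminfo_field and its argument handling are illustrative, not the actual helper.

    get_meminfo_field() {
        # Scan "key: value" pairs until the requested field is found, then print
        # its numeric value (unit suffixes such as "kB" land in the discarded _).
        local want=$1 file=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue
            echo "$val"
            return 0
        done < "$file"
        return 1
    }

For example, get_meminfo_field HugePages_Surp prints 0 on this host, matching the value echoed by setup/common.sh@33 in the trace.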
00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.234 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.234 12:31:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.234 12:31:16 -- setup/common.sh@33 -- # echo 0 00:03:43.234 12:31:16 -- setup/common.sh@33 -- # return 0 00:03:43.234 12:31:16 -- setup/hugepages.sh@99 -- # surp=0 00:03:43.234 12:31:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.234 12:31:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.234 12:31:16 -- setup/common.sh@18 -- # local node= 00:03:43.234 12:31:16 -- setup/common.sh@19 -- # local var val 00:03:43.235 12:31:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.235 12:31:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.235 12:31:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
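At this point get_meminfo is entered again, this time for HugePages_Rsvd, with an empty node argument: the /sys/devices/system/node/node/meminfo existence check fails, so the helper falls back to the system-wide /proc/meminfo and then strips any "Node <n> " prefix from the snapshot. Below is a small sketch of that source-selection step, assuming only that a per-node meminfo file prefixes each line with its node id (which is what the prefix-stripping in the trace implies); the function name read_meminfo_snapshot is illustrative.

    read_meminfo_snapshot() {
        # With a node id, prefer that node's meminfo; otherwise use the global file.
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node <n> "; drop it so the
        # "key: value" layout matches /proc/meminfo.
        sed -E 's/^Node [0-9]+ //' "$mem_f"
    }

    read_meminfo_snapshot       # system-wide snapshot, as in this trace
    read_meminfo_snapshot 0     # node 0 snapshot, when that node directory exists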
00:03:43.235 12:31:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.235 12:31:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.235 12:31:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109334512 kB' 'MemAvailable: 112764552 kB' 'Buffers: 9536 kB' 'Cached: 9548636 kB' 'SwapCached: 0 kB' 'Active: 7141260 kB' 'Inactive: 3687920 kB' 'Active(anon): 6695560 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1274288 kB' 'Mapped: 145848 kB' 'Shmem: 5424552 kB' 'KReclaimable: 241360 kB' 'Slab: 1114980 kB' 'SReclaimable: 241360 kB' 'SUnreclaim: 873620 kB' 'KernelStack: 26928 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9022416 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232944 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': 
' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 
-- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.235 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.235 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 
-- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.236 12:31:16 -- setup/common.sh@33 -- # echo 0 00:03:43.236 12:31:16 -- setup/common.sh@33 -- # return 0 00:03:43.236 12:31:16 -- setup/hugepages.sh@100 -- # resv=0 00:03:43.236 12:31:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:43.236 nr_hugepages=1024 00:03:43.236 12:31:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.236 resv_hugepages=0 00:03:43.236 12:31:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.236 surplus_hugepages=0 00:03:43.236 12:31:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.236 anon_hugepages=0 00:03:43.236 12:31:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.236 12:31:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:43.236 12:31:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.236 12:31:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.236 12:31:16 -- setup/common.sh@18 -- # local node= 00:03:43.236 12:31:16 -- setup/common.sh@19 -- # local var val 00:03:43.236 12:31:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.236 12:31:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.236 12:31:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.236 12:31:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.236 12:31:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.236 12:31:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109334260 kB' 'MemAvailable: 112764300 kB' 'Buffers: 9536 kB' 'Cached: 9548676 kB' 'SwapCached: 0 kB' 'Active: 7140880 kB' 'Inactive: 3687920 kB' 'Active(anon): 6695180 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 
'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1273900 kB' 'Mapped: 145848 kB' 'Shmem: 5424592 kB' 'KReclaimable: 241360 kB' 'Slab: 1114980 kB' 'SReclaimable: 241360 kB' 'SUnreclaim: 873620 kB' 'KernelStack: 26928 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9022432 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232960 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.236 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.236 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 
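The xtrace above is setup/common.sh's get_meminfo walking every key in the captured meminfo array until it reaches the one it was asked for (HugePages_Rsvd in the previous pass, HugePages_Total here), then echoing that key's value. A minimal standalone sketch of that parsing pattern is below; it keeps only what the trace itself shows (mapfile, IFS=': ', read -r var val _, a pattern compare, echo), while the function name and the file-reading framing are illustrative rather than the verbatim SPDK helper.

  #!/usr/bin/env bash
  # Sketch of the key-matching loop traced above: scan meminfo entries and
  # print the value column of the requested key.
  get_meminfo_sketch() {
      local get=$1 var val _ line
      local -a mem
      mapfile -t mem < /proc/meminfo              # one "Key:   value kB" entry per element
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"  # split "HugePages_Total:   1024" into key/value
          if [[ $var == "$get" ]]; then           # non-matching keys just "continue", as in the trace
              echo "$val"
              return 0
          fi
      done
      return 1
  }

  get_meminfo_sketch HugePages_Total              # prints 1024 on the host traced here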
00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 
12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.237 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.237 12:31:16 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.238 12:31:16 -- setup/common.sh@33 -- # echo 1024 00:03:43.238 12:31:16 -- setup/common.sh@33 -- # return 0 00:03:43.238 12:31:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.238 12:31:16 -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.238 12:31:16 -- setup/hugepages.sh@27 -- # local node 00:03:43.238 12:31:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.238 12:31:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:43.238 12:31:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.238 12:31:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:43.238 12:31:16 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:43.238 12:31:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.238 12:31:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.238 12:31:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.238 12:31:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.238 12:31:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.238 12:31:16 -- setup/common.sh@18 -- # local node=0 00:03:43.238 12:31:16 -- setup/common.sh@19 -- # local var val 00:03:43.238 12:31:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.238 12:31:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.238 12:31:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.238 12:31:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.238 12:31:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.238 12:31:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652968 kB' 'MemFree: 54328804 kB' 'MemUsed: 11324164 kB' 'SwapCached: 0 kB' 'Active: 5201220 kB' 'Inactive: 3583492 kB' 'Active(anon): 4955368 kB' 'Inactive(anon): 0 kB' 'Active(file): 245852 kB' 'Inactive(file): 3583492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7891884 kB' 'Mapped: 54064 kB' 'AnonPages: 896028 kB' 'Shmem: 4062540 kB' 'KernelStack: 13992 kB' 'PageTables: 4808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121336 kB' 'Slab: 534780 kB' 'SReclaimable: 121336 kB' 'SUnreclaim: 413444 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.238 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 
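The node-scoped calls in this stretch differ from the system-wide one only in where they read: when a node number is supplied and /sys/devices/system/node/node<N>/meminfo exists, that file replaces /proc/meminfo, and the leading "Node N " prefix those per-node files carry is stripped with an extglob substitution before the same key scan runs. A short sketch of just that source-selection step follows; the -e test, the paths, and the ${mem[@]#Node +([0-9]) } strip come straight from the trace, while the wrapper name and the final printf are illustrative.

  #!/usr/bin/env bash
  shopt -s extglob                                  # required for the +([0-9]) pattern below

  # Sketch: choose the meminfo source for an optional NUMA node and drop the
  # "Node N " prefix that the per-node sysfs files prepend to every line.
  read_meminfo_lines() {
      local node=${1:-}                             # empty -> system-wide /proc/meminfo
      local mem_f=/proc/meminfo
      local -a mem
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")              # "Node 0 HugePages_Total: 512" -> "HugePages_Total: 512"
      printf '%s\n' "${mem[@]}"
  }

  read_meminfo_lines 0 | grep HugePages_Total       # per-node count, 512 in this run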
00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- 
setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@33 -- # echo 0 00:03:43.239 12:31:16 -- setup/common.sh@33 -- # return 0 00:03:43.239 12:31:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.239 12:31:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.239 12:31:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.239 12:31:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:43.239 12:31:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.239 12:31:16 -- setup/common.sh@18 -- # local node=1 00:03:43.239 12:31:16 -- setup/common.sh@19 -- # local var val 00:03:43.239 12:31:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.239 12:31:16 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.239 12:31:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:43.239 12:31:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:43.239 12:31:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.239 12:31:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60671780 kB' 'MemFree: 55005952 kB' 'MemUsed: 5665828 kB' 'SwapCached: 0 kB' 'Active: 1940468 kB' 'Inactive: 104428 kB' 'Active(anon): 1740620 kB' 'Inactive(anon): 0 kB' 'Active(file): 199848 kB' 'Inactive(file): 104428 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1666344 kB' 'Mapped: 91784 kB' 'AnonPages: 378656 kB' 'Shmem: 1362068 kB' 'KernelStack: 12968 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120024 kB' 'Slab: 580200 kB' 'SReclaimable: 120024 kB' 'SUnreclaim: 460176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.239 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.239 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 
00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- 
setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 
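Everything from get_nodes onward in this stretch is the bookkeeping behind the "node0=512 expecting 512" / "node1=512 expecting 512" lines that appear just below: each node's expected count starts from the 512 recorded per node, the reserved pages and the per-node HugePages_Surp readings (both 0 in this run) are added on, and the totals have to line up with what the test configured. The arithmetic is compact enough to show in one place; the sketch below mirrors it for the two-node, zero-reserved, zero-surplus case seen here, with simplified variable roles and illustrative names rather than the exact hugepages.sh bookkeeping.

  #!/usr/bin/env bash
  # Sketch: per-node hugepage accounting behind "nodeN=512 expecting 512".
  no_nodes=2          # nodes found under /sys/devices/system/node in the trace
  resv=0              # HugePages_Rsvd from the system-wide get_meminfo pass
  nodes_sys=()        # what each node is expected to hold (512 apiece here)
  nodes_test=()       # what the accounting arrives at per node

  for (( n = 0; n < no_nodes; n++ )); do
      nodes_sys[n]=512
      surp=0                                        # get_meminfo HugePages_Surp $n returned 0 for both nodes
      nodes_test[n]=$(( nodes_sys[n] + resv + surp ))
      echo "node$n=${nodes_test[n]} expecting ${nodes_sys[n]}"
      (( nodes_test[n] == nodes_sys[n] )) || { echo "node$n mismatch" >&2; exit 1; }
  done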
00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # continue 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.240 12:31:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.240 12:31:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.240 12:31:16 -- setup/common.sh@33 -- # echo 0 00:03:43.240 12:31:16 -- setup/common.sh@33 -- # return 0 00:03:43.240 12:31:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.240 12:31:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.240 12:31:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.240 12:31:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.240 12:31:16 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:43.240 node0=512 expecting 512 00:03:43.240 12:31:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.240 12:31:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.240 12:31:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.240 12:31:16 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:43.240 node1=512 expecting 512 00:03:43.240 12:31:16 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:43.240 00:03:43.240 real 0m4.052s 00:03:43.240 user 0m1.593s 00:03:43.240 sys 0m2.518s 00:03:43.240 12:31:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:43.240 12:31:16 -- common/autotest_common.sh@10 -- # set +x 00:03:43.240 ************************************ 00:03:43.241 END TEST per_node_1G_alloc 00:03:43.241 ************************************ 00:03:43.241 12:31:16 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:43.241 12:31:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.241 12:31:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.241 12:31:16 -- common/autotest_common.sh@10 -- # set +x 00:03:43.241 ************************************ 00:03:43.241 START TEST even_2G_alloc 00:03:43.241 ************************************ 00:03:43.241 12:31:16 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:03:43.241 12:31:16 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:43.241 12:31:16 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:43.241 12:31:16 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:43.241 12:31:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.241 12:31:16 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:43.241 12:31:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:43.241 12:31:16 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:43.241 12:31:16 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.241 12:31:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:43.241 12:31:16 -- 
setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:43.241 12:31:16 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.241 12:31:16 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.241 12:31:16 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:43.241 12:31:16 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:43.241 12:31:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.241 12:31:16 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:43.241 12:31:16 -- setup/hugepages.sh@83 -- # : 512 00:03:43.241 12:31:16 -- setup/hugepages.sh@84 -- # : 1 00:03:43.241 12:31:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.241 12:31:16 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:43.241 12:31:16 -- setup/hugepages.sh@83 -- # : 0 00:03:43.241 12:31:16 -- setup/hugepages.sh@84 -- # : 0 00:03:43.241 12:31:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.241 12:31:16 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:43.241 12:31:16 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:43.241 12:31:16 -- setup/hugepages.sh@153 -- # setup output 00:03:43.241 12:31:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.241 12:31:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:47.462 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:47.462 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:47.462 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:47.462 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:47.462 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:47.462 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:47.462 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:47.462 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:47.462 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:47.462 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:47.462 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:47.462 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:47.462 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:47.462 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:47.462 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:47.462 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:47.462 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:47.462 12:31:20 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:47.462 12:31:20 -- setup/hugepages.sh@89 -- # local node 00:03:47.462 12:31:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.462 12:31:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.462 12:31:20 -- setup/hugepages.sh@92 -- # local surp 00:03:47.462 12:31:20 -- setup/hugepages.sh@93 -- # local resv 00:03:47.462 12:31:20 -- setup/hugepages.sh@94 -- # local anon 00:03:47.462 12:31:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.462 12:31:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.462 12:31:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.462 12:31:20 -- setup/common.sh@18 -- # local node= 00:03:47.462 12:31:20 -- setup/common.sh@19 -- # local var val 00:03:47.462 12:31:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.462 12:31:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.462 12:31:20 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.462 12:31:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.462 12:31:20 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.462 12:31:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109347308 kB' 'MemAvailable: 112777444 kB' 'Buffers: 9536 kB' 'Cached: 9548784 kB' 'SwapCached: 0 kB' 'Active: 7148264 kB' 'Inactive: 3687920 kB' 'Active(anon): 6702564 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1281544 kB' 'Mapped: 146384 kB' 'Shmem: 5424700 kB' 'KReclaimable: 241552 kB' 'Slab: 1115196 kB' 'SReclaimable: 241552 kB' 'SUnreclaim: 873644 kB' 'KernelStack: 27008 kB' 'PageTables: 9152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9024280 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232880 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # 
continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.462 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.462 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 
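The even_2G_alloc test that this trace has moved into asked get_test_nr_hugepages for 2097152 kB, which against the 2048 kB Hugepagesize reported in the meminfo dumps works out to nr_hugepages=1024, and get_test_nr_hugepages_per_node then split that evenly over the two nodes (512 each) before setup.sh was re-run with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. The sizing arithmetic is restated below as a standalone sketch; it only redoes the division implied by the trace and is not the literal hugepages.sh code path.

  #!/usr/bin/env bash
  # Sketch: how the 2 GiB request becomes 1024 pages split 512/512 across nodes.
  size_kb=2097152            # argument passed to get_test_nr_hugepages in the trace
  hugepagesize_kb=2048       # Hugepagesize from the meminfo dumps on this host
  no_nodes=2                 # NUMA nodes present

  nr_hugepages=$(( size_kb / hugepagesize_kb ))     # 1024 -> exported as NRHUGE=1024
  per_node=$(( nr_hugepages / no_nodes ))           # 512  -> the even split requested by HUGE_EVEN_ALLOC=yes

  echo "nr_hugepages=$nr_hugepages"
  for (( n = 0; n < no_nodes; n++ )); do
      echo "node$n=$per_node"
  done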
00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.463 12:31:20 -- setup/common.sh@33 -- # echo 0 00:03:47.463 12:31:20 -- setup/common.sh@33 -- # return 0 00:03:47.463 12:31:20 -- setup/hugepages.sh@97 -- # anon=0 00:03:47.463 12:31:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.463 12:31:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.463 12:31:20 -- setup/common.sh@18 -- # local node= 00:03:47.463 12:31:20 -- setup/common.sh@19 -- # local var val 00:03:47.463 12:31:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.463 12:31:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.463 12:31:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.463 12:31:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.463 12:31:20 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.463 12:31:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109345584 kB' 'MemAvailable: 112775708 kB' 'Buffers: 9536 kB' 'Cached: 9548784 kB' 'SwapCached: 0 
kB' 'Active: 7151328 kB' 'Inactive: 3687920 kB' 'Active(anon): 6705628 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1284656 kB' 'Mapped: 146384 kB' 'Shmem: 5424700 kB' 'KReclaimable: 241528 kB' 'Slab: 1115244 kB' 'SReclaimable: 241528 kB' 'SUnreclaim: 873716 kB' 'KernelStack: 26960 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9027460 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232832 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 
-- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.463 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.463 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 
12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.464 12:31:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.464 12:31:20 -- setup/common.sh@33 -- # echo 0 00:03:47.464 12:31:20 -- setup/common.sh@33 -- # return 0 00:03:47.464 12:31:20 -- setup/hugepages.sh@99 -- # surp=0 00:03:47.464 12:31:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.464 12:31:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.464 12:31:20 -- setup/common.sh@18 -- # local node= 00:03:47.464 12:31:20 -- setup/common.sh@19 -- # local var val 00:03:47.464 12:31:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.464 12:31:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.464 12:31:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.464 12:31:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.464 12:31:20 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.464 12:31:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.464 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109343648 kB' 'MemAvailable: 112773756 kB' 'Buffers: 9536 kB' 'Cached: 9548796 kB' 'SwapCached: 0 kB' 'Active: 7152756 kB' 'Inactive: 3687920 kB' 'Active(anon): 6707056 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1286088 kB' 'Mapped: 146660 kB' 'Shmem: 5424712 kB' 'KReclaimable: 241496 kB' 'Slab: 1115296 kB' 'SReclaimable: 241496 kB' 'SUnreclaim: 873800 kB' 'KernelStack: 26976 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9029336 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232836 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 
12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.465 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.465 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.466 12:31:20 -- setup/common.sh@33 -- # echo 0 00:03:47.466 12:31:20 -- setup/common.sh@33 -- # return 0 00:03:47.466 12:31:20 -- setup/hugepages.sh@100 -- # resv=0 00:03:47.466 12:31:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.466 nr_hugepages=1024 00:03:47.466 12:31:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.466 resv_hugepages=0 00:03:47.466 12:31:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.466 surplus_hugepages=0 00:03:47.466 12:31:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.466 anon_hugepages=0 00:03:47.466 12:31:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.466 12:31:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.466 12:31:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.466 12:31:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.466 12:31:20 -- setup/common.sh@18 -- # local node= 00:03:47.466 12:31:20 -- setup/common.sh@19 -- # local var val 00:03:47.466 12:31:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.466 12:31:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.466 12:31:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.466 12:31:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.466 12:31:20 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.466 12:31:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109344632 kB' 'MemAvailable: 112774740 kB' 'Buffers: 9536 kB' 'Cached: 9548812 kB' 'SwapCached: 0 kB' 'Active: 7147488 kB' 'Inactive: 3687920 kB' 'Active(anon): 6701788 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1280780 kB' 'Mapped: 146220 kB' 'Shmem: 5424728 kB' 'KReclaimable: 241496 kB' 'Slab: 1115296 kB' 'SReclaimable: 241496 kB' 'SUnreclaim: 873800 kB' 'KernelStack: 27008 kB' 'PageTables: 9196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9027912 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232832 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.466 12:31:20 -- 
setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.466 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.466 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 
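[Editor's note] The echoes a little earlier in this trace (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the arithmetic guard in setup/hugepages.sh that the requested page count is fully accounted for before the per-node breakdown that follows. A minimal sketch of that check, with the values copied from this log and hypothetical variable names:

    # Illustrative accounting check only; values taken from the trace above.
    requested=1024      # pages asked for by the test setup
    nr_hugepages=1024   # HugePages_Total read from /proc/meminfo
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    if (( requested == nr_hugepages + surp + resv )) && (( requested == nr_hugepages )); then
        echo "hugepage pool consistent: $nr_hugepages pages, none reserved or surplus"
    fi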
00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # 
[[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.467 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.467 12:31:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.467 12:31:20 -- setup/common.sh@33 -- # echo 1024 00:03:47.467 12:31:20 -- setup/common.sh@33 -- # return 0 00:03:47.467 12:31:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.467 12:31:20 -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.467 12:31:20 -- setup/hugepages.sh@27 -- # local node 00:03:47.467 12:31:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.467 12:31:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.467 12:31:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.467 12:31:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.467 12:31:20 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:47.468 12:31:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.468 12:31:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 
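[Editor's note] get_nodes has just recorded two NUMA nodes with 512 hugepages apiece (no_nodes=2), and the loop starting here re-reads each node's meminfo to fold in any surplus pages. A one-line sketch of reading that per-node split back out of sysfs (illustrative only; these are the standard kernel per-node meminfo files the trace itself consults):

    # Illustrative only: per-node hugepage totals, 512 + 512 = 1024 on this runner.
    awk '/HugePages_Total/ {print FILENAME ": " $NF}' /sys/devices/system/node/node*/meminfo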
00:03:47.468 12:31:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.468 12:31:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.468 12:31:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.468 12:31:20 -- setup/common.sh@18 -- # local node=0 00:03:47.468 12:31:20 -- setup/common.sh@19 -- # local var val 00:03:47.468 12:31:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.468 12:31:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.468 12:31:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.468 12:31:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.468 12:31:20 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.468 12:31:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.468 12:31:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652968 kB' 'MemFree: 54332684 kB' 'MemUsed: 11320284 kB' 'SwapCached: 0 kB' 'Active: 5203048 kB' 'Inactive: 3583492 kB' 'Active(anon): 4957196 kB' 'Inactive(anon): 0 kB' 'Active(file): 245852 kB' 'Inactive(file): 3583492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7891944 kB' 'Mapped: 54076 kB' 'AnonPages: 898036 kB' 'Shmem: 4062600 kB' 'KernelStack: 14008 kB' 'PageTables: 4908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121472 kB' 'Slab: 535040 kB' 'SReclaimable: 121472 kB' 'SUnreclaim: 413568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # 
[[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 
-- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- 
setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.468 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.468 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@33 -- # echo 0 00:03:47.469 12:31:20 -- setup/common.sh@33 -- # return 0 00:03:47.469 12:31:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.469 12:31:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.469 12:31:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.469 12:31:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:47.469 12:31:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.469 12:31:20 -- setup/common.sh@18 -- # local node=1 00:03:47.469 12:31:20 -- setup/common.sh@19 -- # local var val 00:03:47.469 12:31:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.469 12:31:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.469 12:31:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:47.469 12:31:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:47.469 12:31:20 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.469 12:31:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60671780 kB' 'MemFree: 55013744 kB' 'MemUsed: 5658036 kB' 'SwapCached: 0 kB' 'Active: 1944536 kB' 'Inactive: 104428 kB' 'Active(anon): 1744688 kB' 'Inactive(anon): 0 kB' 'Active(file): 199848 kB' 'Inactive(file): 104428 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1666432 kB' 'Mapped: 91780 kB' 'AnonPages: 382876 kB' 'Shmem: 1362156 kB' 'KernelStack: 12968 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120024 kB' 'Slab: 580272 kB' 'SReclaimable: 120024 kB' 'SUnreclaim: 460248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # 
[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # 
continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.469 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.469 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.470 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.470 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.470 12:31:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.470 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.470 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.470 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.470 12:31:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.470 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.470 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.470 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.470 12:31:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.470 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.470 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.470 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.470 12:31:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.470 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.470 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.470 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.470 12:31:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.470 12:31:20 -- setup/common.sh@32 -- # continue 00:03:47.470 12:31:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.470 12:31:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.470 12:31:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.470 12:31:20 -- setup/common.sh@33 -- # echo 0 00:03:47.470 12:31:20 -- setup/common.sh@33 -- # return 0 00:03:47.470 12:31:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.470 12:31:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.470 12:31:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.470 12:31:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.470 12:31:20 -- 
setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:47.470 node0=512 expecting 512 00:03:47.470 12:31:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.470 12:31:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.470 12:31:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.470 12:31:20 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:47.470 node1=512 expecting 512 00:03:47.470 12:31:20 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:47.470 00:03:47.470 real 0m4.067s 00:03:47.470 user 0m1.582s 00:03:47.470 sys 0m2.543s 00:03:47.470 12:31:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:47.470 12:31:20 -- common/autotest_common.sh@10 -- # set +x 00:03:47.470 ************************************ 00:03:47.470 END TEST even_2G_alloc 00:03:47.470 ************************************ 00:03:47.470 12:31:20 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:47.470 12:31:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:47.470 12:31:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:47.470 12:31:20 -- common/autotest_common.sh@10 -- # set +x 00:03:47.470 ************************************ 00:03:47.470 START TEST odd_alloc 00:03:47.470 ************************************ 00:03:47.470 12:31:20 -- common/autotest_common.sh@1114 -- # odd_alloc 00:03:47.470 12:31:20 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:47.470 12:31:20 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:47.470 12:31:20 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:47.470 12:31:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.470 12:31:20 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:47.470 12:31:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:47.470 12:31:20 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:47.470 12:31:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.470 12:31:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:47.470 12:31:20 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:47.470 12:31:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.470 12:31:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.470 12:31:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:47.470 12:31:20 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:47.470 12:31:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.470 12:31:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:47.470 12:31:20 -- setup/hugepages.sh@83 -- # : 513 00:03:47.470 12:31:20 -- setup/hugepages.sh@84 -- # : 1 00:03:47.470 12:31:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.470 12:31:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:47.470 12:31:20 -- setup/hugepages.sh@83 -- # : 0 00:03:47.470 12:31:20 -- setup/hugepages.sh@84 -- # : 0 00:03:47.470 12:31:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.470 12:31:20 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:47.470 12:31:20 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:47.470 12:31:20 -- setup/hugepages.sh@160 -- # setup output 00:03:47.470 12:31:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.470 12:31:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:50.774 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:50.774 0000:80:01.7 (8086 0b00): Already using the vfio-pci 
driver 00:03:50.774 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:50.774 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:50.774 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:51.035 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:51.035 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:51.035 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:51.035 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:51.035 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:51.035 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:51.035 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:51.035 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:51.035 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:51.035 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:51.035 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:51.035 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:51.300 12:31:24 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:51.300 12:31:24 -- setup/hugepages.sh@89 -- # local node 00:03:51.300 12:31:24 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:51.300 12:31:24 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:51.300 12:31:24 -- setup/hugepages.sh@92 -- # local surp 00:03:51.300 12:31:24 -- setup/hugepages.sh@93 -- # local resv 00:03:51.300 12:31:24 -- setup/hugepages.sh@94 -- # local anon 00:03:51.300 12:31:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:51.300 12:31:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:51.300 12:31:24 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:51.300 12:31:24 -- setup/common.sh@18 -- # local node= 00:03:51.300 12:31:24 -- setup/common.sh@19 -- # local var val 00:03:51.300 12:31:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.300 12:31:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.300 12:31:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.300 12:31:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.300 12:31:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.300 12:31:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.300 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109336228 kB' 'MemAvailable: 112766300 kB' 'Buffers: 9536 kB' 'Cached: 9548944 kB' 'SwapCached: 0 kB' 'Active: 7150660 kB' 'Inactive: 3687920 kB' 'Active(anon): 6704960 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1283488 kB' 'Mapped: 145988 kB' 'Shmem: 5424860 kB' 'KReclaimable: 241424 kB' 'Slab: 1115688 kB' 'SReclaimable: 241424 kB' 'SUnreclaim: 874264 kB' 'KernelStack: 27056 kB' 'PageTables: 9432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70501376 kB' 'Committed_AS: 9027556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 233056 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.301 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.301 12:31:24 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 
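# A condensed sketch of the setup/common.sh get_meminfo loop being traced here, reconstructed
# from the traced commands above (local get/node, mem_f, mapfile, the "Node N " strip and the
# field-by-field read); simplified and hypothetical, not the verbatim SPDK script:
shopt -s extglob
get_meminfo() {                               # e.g. get_meminfo AnonHugePages, or HugePages_Surp 1
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem
    # a per-node query reads that node's own meminfo file when it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # drop the leading "Node N " on per-node lines
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # every non-matching field shows up in the trace as a "continue"; the match echoes its value
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}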
00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.302 12:31:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.302 12:31:24 -- setup/common.sh@33 -- # echo 0 00:03:51.302 12:31:24 -- setup/common.sh@33 -- # return 0 00:03:51.302 12:31:24 -- setup/hugepages.sh@97 -- # anon=0 00:03:51.302 12:31:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:51.302 12:31:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.302 12:31:24 -- setup/common.sh@18 -- # local node= 00:03:51.302 12:31:24 -- setup/common.sh@19 -- # local var val 00:03:51.302 12:31:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.302 12:31:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.302 12:31:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.302 12:31:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.302 12:31:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.302 12:31:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.302 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109337868 kB' 'MemAvailable: 112767940 kB' 'Buffers: 9536 kB' 'Cached: 9548948 kB' 'SwapCached: 0 kB' 'Active: 7150336 kB' 'Inactive: 3687920 kB' 'Active(anon): 6704636 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1283116 kB' 'Mapped: 145988 kB' 'Shmem: 5424864 kB' 'KReclaimable: 241424 kB' 'Slab: 1115784 kB' 'SReclaimable: 241424 kB' 'SUnreclaim: 874360 kB' 'KernelStack: 27072 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70501376 kB' 'Committed_AS: 9029212 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 233024 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.303 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.303 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 
12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.304 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.304 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.305 12:31:24 -- setup/common.sh@33 -- # echo 0 00:03:51.305 12:31:24 -- setup/common.sh@33 
-- # return 0 00:03:51.305 12:31:24 -- setup/hugepages.sh@99 -- # surp=0 00:03:51.305 12:31:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:51.305 12:31:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:51.305 12:31:24 -- setup/common.sh@18 -- # local node= 00:03:51.305 12:31:24 -- setup/common.sh@19 -- # local var val 00:03:51.305 12:31:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.305 12:31:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.305 12:31:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.305 12:31:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.305 12:31:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.305 12:31:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109335416 kB' 'MemAvailable: 112765488 kB' 'Buffers: 9536 kB' 'Cached: 9548960 kB' 'SwapCached: 0 kB' 'Active: 7150460 kB' 'Inactive: 3687920 kB' 'Active(anon): 6704760 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1283248 kB' 'Mapped: 145868 kB' 'Shmem: 5424876 kB' 'KReclaimable: 241424 kB' 'Slab: 1115784 kB' 'SReclaimable: 241424 kB' 'SUnreclaim: 874360 kB' 'KernelStack: 27120 kB' 'PageTables: 9372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70501376 kB' 'Committed_AS: 9027584 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 233040 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.305 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.305 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.306 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.306 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.307 12:31:24 -- setup/common.sh@32 
-- # continue 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.307 12:31:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.307 12:31:24 -- setup/common.sh@33 -- # echo 0 00:03:51.307 12:31:24 -- setup/common.sh@33 -- # return 0 00:03:51.307 12:31:24 -- setup/hugepages.sh@100 -- # resv=0 00:03:51.307 12:31:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:51.307 nr_hugepages=1025 00:03:51.307 12:31:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:51.307 resv_hugepages=0 00:03:51.307 12:31:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:51.307 surplus_hugepages=0 00:03:51.307 12:31:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:51.307 anon_hugepages=0 00:03:51.307 12:31:24 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:51.307 12:31:24 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:51.307 12:31:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:51.307 12:31:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:51.307 12:31:24 -- setup/common.sh@18 -- # local node= 00:03:51.307 12:31:24 -- setup/common.sh@19 -- # local var val 00:03:51.307 12:31:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.307 12:31:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.307 12:31:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.307 12:31:24 -- setup/common.sh@25 -- # 
[[ -n '' ]] 00:03:51.307 12:31:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.307 12:31:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.307 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.308 12:31:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109335684 kB' 'MemAvailable: 112765756 kB' 'Buffers: 9536 kB' 'Cached: 9548972 kB' 'SwapCached: 0 kB' 'Active: 7150724 kB' 'Inactive: 3687920 kB' 'Active(anon): 6705024 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1283552 kB' 'Mapped: 145868 kB' 'Shmem: 5424888 kB' 'KReclaimable: 241424 kB' 'Slab: 1115784 kB' 'SReclaimable: 241424 kB' 'SUnreclaim: 874360 kB' 'KernelStack: 27088 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70501376 kB' 'Committed_AS: 9029240 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 233072 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.308 12:31:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.308 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.308 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.571 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.571 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.571 12:31:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.571 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.571 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.571 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.571 12:31:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.571 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.571 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.571 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.571 12:31:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- 
setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 
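What the xtrace above is doing: setup/common.sh's get_meminfo walks the chosen meminfo file with IFS=': ', skipping ('continue') every key that is not the one requested, then echoes the matching value and returns. A minimal standalone sketch of that pattern, assuming a system-wide lookup only (the real helper also handles per-node files and the 'Node N' prefix):

sketch_get_meminfo() {
    # hypothetical simplification of setup/common.sh:get_meminfo
    local get=$1 mem_f=/proc/meminfo var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long run of 'continue' lines in the trace
        echo "$val"                        # e.g. 1025 for HugePages_Total on this box
        return 0
    done < "$mem_f"
    return 1                               # requested key not present
}
# usage: sketch_get_meminfo HugePages_Rsvd   -> 0 in this run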
00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- 
setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.572 12:31:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.572 12:31:24 -- setup/common.sh@33 -- # echo 1025 00:03:51.572 12:31:24 -- setup/common.sh@33 -- # return 0 00:03:51.572 12:31:24 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:51.572 12:31:24 -- setup/hugepages.sh@112 -- # get_nodes 00:03:51.572 12:31:24 -- setup/hugepages.sh@27 -- # local node 00:03:51.572 12:31:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.572 12:31:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:51.572 12:31:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.572 12:31:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:51.572 12:31:24 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:51.572 12:31:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:51.572 12:31:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.572 12:31:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.572 12:31:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:51.572 12:31:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.572 12:31:24 -- setup/common.sh@18 -- # local node=0 00:03:51.572 12:31:24 -- setup/common.sh@19 -- # local var val 00:03:51.572 12:31:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.572 12:31:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.572 12:31:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:51.572 12:31:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:51.572 12:31:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.572 12:31:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.572 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652968 kB' 'MemFree: 54326824 kB' 'MemUsed: 11326144 kB' 'SwapCached: 0 kB' 'Active: 5200972 kB' 'Inactive: 3583492 kB' 'Active(anon): 4955120 kB' 'Inactive(anon): 0 kB' 'Active(file): 245852 kB' 'Inactive(file): 3583492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7891960 kB' 'Mapped: 54088 kB' 'AnonPages: 895696 kB' 'Shmem: 4062616 kB' 'KernelStack: 14008 kB' 'PageTables: 4804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'KReclaimable: 121400 kB' 'Slab: 534956 kB' 'SReclaimable: 121400 kB' 'SUnreclaim: 413556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 
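The node-0 query above first switches its data source: with node=0 set and /sys/devices/system/node/node0/meminfo present, mem_f is repointed at the per-node file and every slurped line has its leading 'Node 0 ' prefix stripped before the same key scan runs. A hedged sketch of that selection step (assumed simplification; the prefix strip needs extglob, as the script's +([0-9]) pattern implies):

shopt -s extglob                       # required for the +([0-9]) prefix strip below
node_meminfo() {
    # hypothetical reduction of the mem_f / mapfile handling seen in the trace
    local node=$1 mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"          # one meminfo entry per array element
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with 'Node N '
    printf '%s\n' "${mem[@]}"
}
# usage: node_meminfo 0 | grep HugePages_Surp   -> HugePages_Surp: 0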
00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.573 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.573 12:31:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.573 12:31:24 -- setup/common.sh@33 -- # echo 0 00:03:51.573 12:31:24 -- setup/common.sh@33 -- # return 0 
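At this point node 0 has reported HugePages_Surp: 0; the trace that follows repeats the identical lookup for node 1 and also gets 0, so the odd_alloc verdict comes down to comparing the per-node counts {512, 513} read from sysfs against the expected split of the 1025 odd pages. Roughly, and with the numbers taken from this run, the comparison works like this (indexed arrays are used so the value sets come out sorted):

# rough sketch of the set comparison behind the 'node0=512 expecting 513' lines below
nodes_test=(513 512)    # expected per-node counts (node0, node1) in this run
nodes_sys=(512 513)     # counts actually read from /sys in this run
declare -a sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1   # use the counts themselves as indices ...
    sorted_s[nodes_sys[node]]=1
done
# ... so "${!sorted_t[*]}" expands them in ascending order: the test only requires
# the same multiset of counts, not a node-for-node match
[[ "${!sorted_s[*]}" == "${!sorted_t[*]}" ]] && echo "odd_alloc split OK: ${!sorted_s[*]}"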
00:03:51.573 12:31:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:51.573 12:31:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.573 12:31:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.573 12:31:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:51.573 12:31:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.573 12:31:24 -- setup/common.sh@18 -- # local node=1 00:03:51.573 12:31:24 -- setup/common.sh@19 -- # local var val 00:03:51.573 12:31:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.574 12:31:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.574 12:31:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:51.574 12:31:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:51.574 12:31:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.574 12:31:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60671780 kB' 'MemFree: 55010088 kB' 'MemUsed: 5661692 kB' 'SwapCached: 0 kB' 'Active: 1950000 kB' 'Inactive: 104428 kB' 'Active(anon): 1750152 kB' 'Inactive(anon): 0 kB' 'Active(file): 199848 kB' 'Inactive(file): 104428 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1666576 kB' 'Mapped: 91780 kB' 'AnonPages: 388096 kB' 'Shmem: 1362300 kB' 'KernelStack: 13000 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120024 kB' 'Slab: 580828 kB' 'SReclaimable: 120024 kB' 'SUnreclaim: 460804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 
12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 
-- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.574 12:31:24 -- setup/common.sh@32 -- # continue 00:03:51.574 12:31:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.575 12:31:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.575 12:31:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.575 12:31:24 -- setup/common.sh@33 -- # echo 0 00:03:51.575 12:31:24 -- setup/common.sh@33 -- # return 0 00:03:51.575 12:31:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:51.575 12:31:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:51.575 12:31:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:51.575 12:31:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:51.575 12:31:24 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:51.575 node0=512 expecting 513 00:03:51.575 12:31:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:51.575 12:31:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:51.575 12:31:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:51.575 12:31:24 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:51.575 node1=513 expecting 512 00:03:51.575 12:31:24 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:51.575 00:03:51.575 real 0m4.065s 00:03:51.575 user 0m1.626s 00:03:51.575 sys 0m2.493s 00:03:51.575 12:31:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:51.575 12:31:24 -- common/autotest_common.sh@10 -- # set +x 00:03:51.575 ************************************ 00:03:51.575 END TEST odd_alloc 00:03:51.575 ************************************ 00:03:51.575 12:31:24 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:51.575 12:31:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:51.575 12:31:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.575 12:31:24 -- common/autotest_common.sh@10 -- # set +x 00:03:51.575 ************************************ 00:03:51.575 START TEST custom_alloc 00:03:51.575 ************************************ 00:03:51.575 12:31:24 -- common/autotest_common.sh@1114 -- # custom_alloc 00:03:51.575 12:31:24 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:51.575 
12:31:24 -- setup/hugepages.sh@169 -- # local node 00:03:51.575 12:31:24 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:51.575 12:31:24 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:51.575 12:31:24 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:51.575 12:31:24 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:51.575 12:31:24 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:51.575 12:31:24 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:51.575 12:31:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:51.575 12:31:24 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:51.575 12:31:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:51.575 12:31:24 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:51.575 12:31:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.575 12:31:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:51.575 12:31:24 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:51.575 12:31:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.575 12:31:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.575 12:31:24 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:51.575 12:31:24 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:51.575 12:31:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:51.575 12:31:24 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:51.575 12:31:24 -- setup/hugepages.sh@83 -- # : 256 00:03:51.575 12:31:24 -- setup/hugepages.sh@84 -- # : 1 00:03:51.575 12:31:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:51.575 12:31:24 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:51.575 12:31:24 -- setup/hugepages.sh@83 -- # : 0 00:03:51.575 12:31:24 -- setup/hugepages.sh@84 -- # : 0 00:03:51.575 12:31:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:51.575 12:31:24 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:51.575 12:31:24 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:51.575 12:31:24 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:51.575 12:31:24 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:51.575 12:31:24 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:51.575 12:31:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:51.575 12:31:24 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:51.575 12:31:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:51.575 12:31:24 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:51.575 12:31:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.575 12:31:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:51.575 12:31:24 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:51.575 12:31:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.575 12:31:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.575 12:31:24 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:51.575 12:31:24 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:51.575 12:31:24 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:51.575 12:31:24 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:51.575 12:31:24 -- setup/hugepages.sh@78 -- # return 0 00:03:51.575 12:31:24 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:51.575 12:31:24 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:51.575 12:31:24 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:51.575 12:31:24 -- setup/hugepages.sh@183 -- # (( 
_nr_hugepages += nodes_hp[node] )) 00:03:51.575 12:31:24 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:51.575 12:31:24 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:51.575 12:31:24 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:51.575 12:31:24 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:51.575 12:31:24 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:51.575 12:31:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.575 12:31:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:51.575 12:31:24 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:51.575 12:31:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.575 12:31:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.575 12:31:24 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:51.575 12:31:24 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:51.575 12:31:24 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:51.575 12:31:24 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:51.575 12:31:24 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:51.575 12:31:24 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:51.575 12:31:24 -- setup/hugepages.sh@78 -- # return 0 00:03:51.575 12:31:24 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:51.575 12:31:24 -- setup/hugepages.sh@187 -- # setup output 00:03:51.575 12:31:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.575 12:31:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:54.888 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.149 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.149 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.149 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.149 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.149 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.149 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.149 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.149 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.149 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:55.149 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.149 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.149 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.149 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.149 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.149 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.149 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.410 12:31:28 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:55.410 12:31:28 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:55.410 12:31:28 -- setup/hugepages.sh@89 -- # local node 00:03:55.410 12:31:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.410 12:31:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.410 12:31:28 -- setup/hugepages.sh@92 -- # local surp 00:03:55.410 12:31:28 -- setup/hugepages.sh@93 -- # local resv 00:03:55.410 12:31:28 -- setup/hugepages.sh@94 -- # local anon 00:03:55.410 12:31:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 
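The custom_alloc prologue traced above turns two sizes into per-node page counts: treating both figures as kB against the 2048 kB Hugepagesize reported earlier, 1048576 kB becomes 512 pages for node 0 and 2097152 kB becomes 1024 pages for node 1, and the result is handed to setup.sh as HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' (1536 pages total, matching the HugePages_Total: 1536 in the meminfo dump below). A small sketch of that derivation (a plain string HUGENODE is assumed here; the script itself accumulates it slightly differently):

default_hugepages=2048                       # kB, from 'Hugepagesize: 2048 kB'
declare -a nodes_hp=()
size_to_pages() { echo $(( $1 / default_hugepages )); }   # kB -> 2 MiB pages
nodes_hp[0]=$(size_to_pages 1048576)         # 512
nodes_hp[1]=$(size_to_pages 2097152)         # 1024
HUGENODE=''
for node in "${!nodes_hp[@]}"; do
    HUGENODE+="${HUGENODE:+,}nodes_hp[$node]=${nodes_hp[node]}"
done
echo "$HUGENODE"                             # nodes_hp[0]=512,nodes_hp[1]=1024
echo "nr_hugepages=$(( nodes_hp[0] + nodes_hp[1] ))"      # 1536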
00:03:55.410 12:31:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.410 12:31:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.410 12:31:28 -- setup/common.sh@18 -- # local node= 00:03:55.410 12:31:28 -- setup/common.sh@19 -- # local var val 00:03:55.411 12:31:28 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.411 12:31:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.411 12:31:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.411 12:31:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.411 12:31:28 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.411 12:31:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 108270608 kB' 'MemAvailable: 111700680 kB' 'Buffers: 9536 kB' 'Cached: 9549092 kB' 'SwapCached: 0 kB' 'Active: 7154584 kB' 'Inactive: 3687920 kB' 'Active(anon): 6708884 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1286864 kB' 'Mapped: 145980 kB' 'Shmem: 5425008 kB' 'KReclaimable: 241424 kB' 'Slab: 1115600 kB' 'SReclaimable: 241424 kB' 'SUnreclaim: 874176 kB' 'KernelStack: 26976 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69978112 kB' 'Committed_AS: 9025068 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232880 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.411 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.411 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- 
# continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.412 12:31:28 -- setup/common.sh@33 -- # echo 0 00:03:55.412 12:31:28 -- setup/common.sh@33 -- # return 0 00:03:55.412 12:31:28 -- setup/hugepages.sh@97 -- # anon=0 00:03:55.412 12:31:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.412 12:31:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.412 12:31:28 -- setup/common.sh@18 -- # local node= 00:03:55.412 12:31:28 -- setup/common.sh@19 -- # local var val 00:03:55.412 12:31:28 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.412 12:31:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.412 12:31:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.412 12:31:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.412 12:31:28 -- setup/common.sh@28 -- # mapfile 
-t mem 00:03:55.412 12:31:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 108272272 kB' 'MemAvailable: 111702344 kB' 'Buffers: 9536 kB' 'Cached: 9549096 kB' 'SwapCached: 0 kB' 'Active: 7154716 kB' 'Inactive: 3687920 kB' 'Active(anon): 6709016 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1286996 kB' 'Mapped: 145968 kB' 'Shmem: 5425012 kB' 'KReclaimable: 241424 kB' 'Slab: 1115592 kB' 'SReclaimable: 241424 kB' 'SUnreclaim: 874168 kB' 'KernelStack: 26960 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69978112 kB' 'Committed_AS: 9025080 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232864 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.412 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.412 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # 
[[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.413 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.413 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # 
continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 
12:31:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.414 12:31:28 -- setup/common.sh@33 -- # echo 0 00:03:55.414 12:31:28 -- setup/common.sh@33 -- # return 0 00:03:55.414 12:31:28 -- setup/hugepages.sh@99 -- # surp=0 00:03:55.414 12:31:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.414 12:31:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.414 12:31:28 -- setup/common.sh@18 -- # local node= 00:03:55.414 12:31:28 -- setup/common.sh@19 -- # local var val 00:03:55.414 12:31:28 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.414 12:31:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.414 12:31:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.414 12:31:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.414 12:31:28 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.414 12:31:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.414 12:31:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 108272904 kB' 'MemAvailable: 111702976 kB' 'Buffers: 9536 kB' 'Cached: 9549108 kB' 'SwapCached: 0 kB' 'Active: 7154240 kB' 'Inactive: 3687920 kB' 'Active(anon): 6708540 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1286972 kB' 'Mapped: 145892 kB' 'Shmem: 5425024 kB' 'KReclaimable: 241424 kB' 'Slab: 1115596 kB' 'SReclaimable: 241424 kB' 'SUnreclaim: 874172 kB' 'KernelStack: 26960 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69978112 kB' 'Committed_AS: 9025096 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 
232864 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.414 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.414 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 
12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.415 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.415 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 
12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 12:31:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.416 12:31:28 -- setup/common.sh@33 -- # echo 0 00:03:55.416 12:31:28 -- setup/common.sh@33 -- # return 0 00:03:55.416 12:31:28 -- setup/hugepages.sh@100 -- # resv=0 00:03:55.416 12:31:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:55.416 nr_hugepages=1536 00:03:55.416 12:31:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.416 resv_hugepages=0 00:03:55.416 12:31:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.416 surplus_hugepages=0 00:03:55.416 12:31:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.416 anon_hugepages=0 00:03:55.416 12:31:28 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:55.416 12:31:28 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:55.416 12:31:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.416 12:31:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.416 12:31:28 -- setup/common.sh@18 -- # local node= 00:03:55.416 12:31:28 -- setup/common.sh@19 -- # local var val 00:03:55.416 12:31:28 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.416 12:31:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.416 12:31:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.416 12:31:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.416 12:31:28 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.416 12:31:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.677 12:31:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 108273516 kB' 'MemAvailable: 111703588 kB' 'Buffers: 9536 kB' 'Cached: 9549120 kB' 'SwapCached: 0 kB' 'Active: 7154360 kB' 'Inactive: 3687920 kB' 'Active(anon): 6708660 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1287024 kB' 'Mapped: 145892 kB' 'Shmem: 5425036 kB' 'KReclaimable: 241424 kB' 'Slab: 1115596 kB' 'SReclaimable: 241424 kB' 'SUnreclaim: 874172 kB' 'KernelStack: 26944 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69978112 kB' 'Committed_AS: 9025108 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232864 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:55.677 12:31:28 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.677 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.677 12:31:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.678 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.678 12:31:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.678 12:31:28 -- setup/common.sh@33 -- # echo 1536 00:03:55.678 12:31:28 -- setup/common.sh@33 -- # return 0 00:03:55.678 12:31:28 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:55.678 12:31:28 -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.678 12:31:28 -- setup/hugepages.sh@27 -- # local node 00:03:55.678 12:31:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.678 12:31:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 
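Note on the trace above: the long runs of [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] followed by continue come from the get_meminfo helper in setup/common.sh, which appears to snapshot /proc/meminfo (or a per-node meminfo file under /sys/devices/system/node) in one printf, then walk the fields until it reaches the requested key and echoes its value: 0 for AnonHugePages, HugePages_Surp and HugePages_Rsvd here, 1536 for HugePages_Total. A minimal bash sketch of that lookup, reconstructed from the trace rather than taken from the SPDK sources (the function name and local variable names below are illustrative only):

    # Sketch only: approximates the lookup loop visible in this trace.
    shopt -s extglob                       # needed for the +([0-9]) pattern below
    get_meminfo_sketch() {
        local get=$1 node=${2:-}           # key to fetch, optional NUMA node index
        local mem_f=/proc/meminfo mem line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each key with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated check/continue pairs above
            echo "${val:-0}"                   # e.g. 0 for HugePages_Surp, 1536 for HugePages_Total
            return 0
        done
        echo 0
    }
    # Illustrative calls matching this section of the trace:
    #   get_meminfo_sketch HugePages_Rsvd      # -> 0
    #   get_meminfo_sketch HugePages_Surp 0    # per-node lookup, as at hugepages.sh@117

With nr_hugepages=1536 and both surplus and reserved pages at 0, the (( 1536 == nr_hugepages + surp + resv )) check at hugepages.sh@110 passes; get_nodes then records the expected per-node counts (512 for node0 above, 1024 for node1 in the trace that resumes below, no_nodes=2), and the per-node HugePages_Surp lookups that follow reuse the same helper against /sys/devices/system/node/node0/meminfo.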
00:03:55.678 12:31:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.679 12:31:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:55.679 12:31:28 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.679 12:31:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.679 12:31:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.679 12:31:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.679 12:31:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.679 12:31:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.679 12:31:28 -- setup/common.sh@18 -- # local node=0 00:03:55.679 12:31:28 -- setup/common.sh@19 -- # local var val 00:03:55.679 12:31:28 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.679 12:31:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.679 12:31:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.679 12:31:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.679 12:31:28 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.679 12:31:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652968 kB' 'MemFree: 54306228 kB' 'MemUsed: 11346740 kB' 'SwapCached: 0 kB' 'Active: 5201240 kB' 'Inactive: 3583492 kB' 'Active(anon): 4955388 kB' 'Inactive(anon): 0 kB' 'Active(file): 245852 kB' 'Inactive(file): 3583492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7891960 kB' 'Mapped: 54108 kB' 'AnonPages: 896020 kB' 'Shmem: 4062616 kB' 'KernelStack: 13976 kB' 'PageTables: 4756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121400 kB' 'Slab: 534688 kB' 'SReclaimable: 121400 kB' 'SUnreclaim: 413288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 
12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 
00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.679 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.679 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@33 -- # echo 0 00:03:55.680 12:31:28 -- setup/common.sh@33 -- # return 0 00:03:55.680 12:31:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.680 12:31:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.680 12:31:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.680 12:31:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:55.680 12:31:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.680 12:31:28 -- setup/common.sh@18 -- # local node=1 00:03:55.680 12:31:28 -- setup/common.sh@19 -- # local var val 00:03:55.680 12:31:28 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.680 12:31:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.680 12:31:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:55.680 12:31:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:55.680 12:31:28 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.680 12:31:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60671780 kB' 'MemFree: 53967548 kB' 'MemUsed: 6704232 kB' 'SwapCached: 0 kB' 'Active: 1953300 kB' 'Inactive: 104428 kB' 'Active(anon): 1753452 kB' 'Inactive(anon): 0 kB' 'Active(file): 199848 kB' 'Inactive(file): 104428 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1666736 kB' 'Mapped: 91784 kB' 'AnonPages: 391144 kB' 'Shmem: 1362460 kB' 'KernelStack: 12968 kB' 'PageTables: 4224 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120024 kB' 'Slab: 580908 kB' 'SReclaimable: 120024 kB' 'SUnreclaim: 460884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 
00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.680 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.680 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # continue 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 12:31:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 12:31:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.681 12:31:28 -- setup/common.sh@33 -- # echo 0 
00:03:55.681 12:31:28 -- setup/common.sh@33 -- # return 0
00:03:55.681 12:31:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:55.681 12:31:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:55.681 12:31:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:55.681 12:31:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:55.681 12:31:28 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:55.681 node0=512 expecting 512
00:03:55.681 12:31:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:55.681 12:31:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:55.681 12:31:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:55.681 12:31:28 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:55.681 node1=1024 expecting 1024
00:03:55.681 12:31:28 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:55.681
00:03:55.681 real 0m4.076s
00:03:55.681 user 0m1.585s
00:03:55.681 sys 0m2.545s
00:03:55.681 12:31:28 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:55.681 12:31:28 -- common/autotest_common.sh@10 -- # set +x
00:03:55.681 ************************************
00:03:55.681 END TEST custom_alloc
00:03:55.681 ************************************
00:03:55.681 12:31:28 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:55.681 12:31:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:55.681 12:31:28 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:55.681 12:31:28 -- common/autotest_common.sh@10 -- # set +x
00:03:55.681 ************************************
00:03:55.681 START TEST no_shrink_alloc
00:03:55.681 ************************************
00:03:55.681 12:31:28 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:03:55.681 12:31:28 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:55.681 12:31:28 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:55.681 12:31:28 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:55.681 12:31:28 -- setup/hugepages.sh@51 -- # shift
00:03:55.681 12:31:28 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:55.681 12:31:28 -- setup/hugepages.sh@52 -- # local node_ids
00:03:55.681 12:31:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:55.681 12:31:28 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:55.681 12:31:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:55.681 12:31:28 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:55.681 12:31:28 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:55.681 12:31:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:55.681 12:31:28 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:55.681 12:31:28 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:55.681 12:31:28 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:55.681 12:31:28 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:55.681 12:31:28 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:55.681 12:31:28 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:55.681 12:31:28 -- setup/hugepages.sh@73 -- # return 0
00:03:55.681 12:31:28 -- setup/hugepages.sh@198 -- # setup output
00:03:55.681 12:31:28 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:55.681 12:31:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:59.891 0000:80:01.6 (8086 0b00): Already using the vfio-pci
driver 00:03:59.891 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.891 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.891 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.891 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.891 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.891 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.891 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.891 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.891 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:59.891 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.891 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.891 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.891 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.891 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.891 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.891 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.891 12:31:32 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:59.891 12:31:32 -- setup/hugepages.sh@89 -- # local node 00:03:59.891 12:31:32 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.891 12:31:32 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.891 12:31:32 -- setup/hugepages.sh@92 -- # local surp 00:03:59.891 12:31:32 -- setup/hugepages.sh@93 -- # local resv 00:03:59.891 12:31:32 -- setup/hugepages.sh@94 -- # local anon 00:03:59.891 12:31:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.891 12:31:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.891 12:31:32 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.891 12:31:32 -- setup/common.sh@18 -- # local node= 00:03:59.891 12:31:32 -- setup/common.sh@19 -- # local var val 00:03:59.891 12:31:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.892 12:31:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.892 12:31:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.892 12:31:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.892 12:31:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.892 12:31:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109305984 kB' 'MemAvailable: 112736040 kB' 'Buffers: 9536 kB' 'Cached: 9549240 kB' 'SwapCached: 0 kB' 'Active: 7159300 kB' 'Inactive: 3687920 kB' 'Active(anon): 6713600 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1291692 kB' 'Mapped: 145848 kB' 'Shmem: 5425156 kB' 'KReclaimable: 241392 kB' 'Slab: 1115580 kB' 'SReclaimable: 241392 kB' 'SUnreclaim: 874188 kB' 'KernelStack: 26992 kB' 'PageTables: 9056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9030552 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232912 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- 
setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.892 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.893 12:31:32 -- setup/common.sh@33 -- # echo 0 00:03:59.893 12:31:32 -- setup/common.sh@33 -- # return 0 00:03:59.893 12:31:32 -- setup/hugepages.sh@97 -- # anon=0 00:03:59.893 12:31:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.893 12:31:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.893 12:31:32 -- setup/common.sh@18 -- # local node= 00:03:59.893 12:31:32 -- setup/common.sh@19 -- # local var val 00:03:59.893 12:31:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.893 12:31:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.893 12:31:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.893 12:31:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.893 12:31:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.893 12:31:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109306104 kB' 'MemAvailable: 112736160 kB' 'Buffers: 9536 kB' 'Cached: 9549248 kB' 'SwapCached: 0 kB' 'Active: 7159176 kB' 'Inactive: 3687920 kB' 'Active(anon): 6713476 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1291708 kB' 'Mapped: 145748 kB' 'Shmem: 5425164 kB' 'KReclaimable: 241392 kB' 'Slab: 1115580 kB' 'SReclaimable: 241392 kB' 'SUnreclaim: 874188 kB' 'KernelStack: 26976 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9028924 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232864 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- 
# continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.893 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.893 12:31:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:59.894 12:31:32 -- setup/common.sh@33 -- # echo 0 00:03:59.894 12:31:32 -- setup/common.sh@33 -- # return 0 00:03:59.894 12:31:32 -- setup/hugepages.sh@99 -- # surp=0 00:03:59.894 12:31:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.894 12:31:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.894 12:31:32 -- setup/common.sh@18 -- # local node= 00:03:59.894 12:31:32 -- setup/common.sh@19 -- # local var val 00:03:59.894 12:31:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.894 12:31:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.894 12:31:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.894 12:31:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.894 12:31:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.894 12:31:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109309368 kB' 'MemAvailable: 112739424 kB' 'Buffers: 9536 kB' 'Cached: 9549264 kB' 'SwapCached: 0 kB' 'Active: 7160200 kB' 'Inactive: 3687920 kB' 'Active(anon): 6714500 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1292848 kB' 'Mapped: 145764 kB' 'Shmem: 5425180 kB' 'KReclaimable: 241392 kB' 'Slab: 1115580 kB' 'SReclaimable: 241392 kB' 'SUnreclaim: 874188 kB' 'KernelStack: 27104 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9030588 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232976 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ Cached == 
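
The block above is the xtrace of setup/common.sh's get_meminfo helper: it snapshots /proc/meminfo (or a node-local meminfo file) with mapfile, strips any leading "Node N " prefix, then walks the fields with IFS=': ' read until the requested key matches and echoes only its value. The following is a rough, standalone reconstruction of that pattern, not the verbatim SPDK code; the names mirror the trace, the body is an approximation.

    # Rough reconstruction of the traced setup/common.sh helper (approximation only).
    shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # Per-node lookups read the node-local copy of meminfo instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                    # value only, e.g. 1024 or 109309368
            return 0
        done
        return 1
    }

Calling it for HugePages_Surp against this snapshot is what yields the surp=0 seen above; the same field walk is repeated next for HugePages_Rsvd.
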
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.894 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.894 12:31:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 
00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.895 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.895 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.896 12:31:32 -- setup/common.sh@33 -- # echo 0 00:03:59.896 12:31:32 -- setup/common.sh@33 -- # return 0 00:03:59.896 12:31:32 -- setup/hugepages.sh@100 -- # resv=0 00:03:59.896 12:31:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:59.896 nr_hugepages=1024 00:03:59.896 12:31:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.896 resv_hugepages=0 00:03:59.896 12:31:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.896 surplus_hugepages=0 00:03:59.896 12:31:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.896 anon_hugepages=0 00:03:59.896 12:31:32 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.896 12:31:32 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:59.896 12:31:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.896 12:31:32 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.896 12:31:32 -- setup/common.sh@18 -- # local node= 00:03:59.896 12:31:32 -- setup/common.sh@19 -- # local var val 00:03:59.896 12:31:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.896 12:31:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.896 12:31:32 -- 
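
With surp and resv in hand, the script reports the totals and asserts that the kernel's HugePages_Total equals the requested nr_hugepages plus surplus and reserved pages. A condensed, hypothetical version of that consistency check, reusing the get_meminfo sketch above (where nr_hugepages actually comes from is not shown in the log; /proc/sys/vm/nr_hugepages is the standard kernel knob and is used here as an assumption):

    # Hypothetical standalone consistency check in the spirit of the traced
    # verify_nr_hugepages logic (not the SPDK script itself).
    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)   # pages requested from the kernel
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    total=$(get_meminfo HugePages_Total)

    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting is consistent ($total pages)"
    else
        echo "unexpected hugepage total: $total" >&2
        exit 1
    fi

In this run the reads come back as 1024, 0 and 0, so both arithmetic tests in the trace succeed.
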
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.896 12:31:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.896 12:31:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.896 12:31:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109309888 kB' 'MemAvailable: 112739944 kB' 'Buffers: 9536 kB' 'Cached: 9549280 kB' 'SwapCached: 0 kB' 'Active: 7159280 kB' 'Inactive: 3687920 kB' 'Active(anon): 6713580 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1291828 kB' 'Mapped: 145748 kB' 'Shmem: 5425196 kB' 'KReclaimable: 241392 kB' 'Slab: 1115580 kB' 'SReclaimable: 241392 kB' 'SUnreclaim: 874188 kB' 'KernelStack: 27104 kB' 'PageTables: 9428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9030732 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232944 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- 
setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.896 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.896 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 
00:03:59.896 12:31:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 
00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 
00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.897 12:31:32 -- setup/common.sh@33 -- # echo 1024 00:03:59.897 12:31:32 -- setup/common.sh@33 -- # return 0 00:03:59.897 12:31:32 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.897 12:31:32 -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.897 12:31:32 -- setup/hugepages.sh@27 -- # local node 00:03:59.897 12:31:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.897 12:31:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:59.897 12:31:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.897 12:31:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:59.897 12:31:32 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.897 12:31:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.897 12:31:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.897 12:31:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.897 12:31:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.897 12:31:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.897 12:31:32 -- setup/common.sh@18 -- # local node=0 00:03:59.897 12:31:32 -- setup/common.sh@19 -- # local var val 00:03:59.897 12:31:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.897 12:31:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.897 12:31:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.897 12:31:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.897 12:31:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.897 12:31:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652968 kB' 'MemFree: 53242316 kB' 'MemUsed: 12410652 kB' 'SwapCached: 0 kB' 'Active: 5202640 kB' 'Inactive: 3583492 kB' 'Active(anon): 4956788 kB' 'Inactive(anon): 0 kB' 'Active(file): 245852 kB' 'Inactive(file): 3583492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7892000 kB' 'Mapped: 53968 kB' 'AnonPages: 897364 
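
After the system-wide pass, get_nodes enumerates /sys/devices/system/node/node[0-9]* (the trace shows no_nodes=2) and the per-node loop re-reads HugePages_Surp from node0's own meminfo file, which is why the next snapshot is the much smaller node-local one. A sketch of that per-node walk, again hypothetical and reusing the get_meminfo sketch above:

    # Per-node view of the same counters (hypothetical walk; the traced script
    # keeps nodes_sys/nodes_test arrays rather than printing directly).
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}                       # "node0" -> "0"
        total=$(get_meminfo HugePages_Total "$node")
        free=$(get_meminfo HugePages_Free "$node")
        surp=$(get_meminfo HugePages_Surp "$node")
        echo "node$node: HugePages_Total=$total Free=$free Surp=$surp"
    done

For node0 this run reports 1024 pages, matching the "node0=1024 expecting 1024" line printed shortly after.
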
kB' 'Shmem: 4062656 kB' 'KernelStack: 14168 kB' 'PageTables: 4900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121368 kB' 'Slab: 534624 kB' 'SReclaimable: 121368 kB' 'SUnreclaim: 413256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.897 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.897 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 
-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 
00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # continue 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.898 12:31:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.898 12:31:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.898 12:31:32 -- setup/common.sh@33 -- # echo 0 00:03:59.898 12:31:32 -- setup/common.sh@33 -- # return 0 00:03:59.898 12:31:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.898 12:31:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.898 12:31:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.898 12:31:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.898 12:31:32 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:59.898 node0=1024 expecting 1024 00:03:59.898 12:31:32 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:59.898 12:31:32 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:59.898 12:31:32 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:59.898 12:31:32 -- setup/hugepages.sh@202 -- # setup output 00:03:59.898 12:31:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.898 12:31:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:03.205 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:03.205 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:03.205 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:03.205 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:03.205 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:03.205 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:03.205 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:03.205 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:03.205 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:03.205 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:03.205 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:03.205 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:03.205 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:03.205 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:03.205 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:03.205 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:03.205 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:03.468 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:03.468 12:31:36 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:03.468 12:31:36 -- setup/hugepages.sh@89 -- # local node 00:04:03.468 12:31:36 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.468 12:31:36 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.468 12:31:36 -- setup/hugepages.sh@92 -- # local surp 00:04:03.468 12:31:36 -- setup/hugepages.sh@93 -- # local resv 00:04:03.468 12:31:36 -- setup/hugepages.sh@94 -- # local anon 00:04:03.468 12:31:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.468 12:31:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.468 12:31:36 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.468 12:31:36 -- setup/common.sh@18 -- # local node= 00:04:03.468 12:31:36 -- setup/common.sh@19 -- # local var val 00:04:03.468 12:31:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.468 12:31:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.468 12:31:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.468 12:31:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.468 12:31:36 -- setup/common.sh@28 -- # 
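
At this point the test re-runs scripts/setup.sh with CLEAR_HUGE=no and NRHUGE=512 exported in the environment; since 1024 pages are already reserved, the reservation is left alone, which is what the "Requested 512 hugepages but 1024 already allocated on node0" line records, and the vfio-pci messages appear to be setup.sh noting devices already bound from an earlier pass. An equivalent manual invocation, sketched only from the variables visible in the trace (consult scripts/setup.sh in your own checkout before treating this as the supported interface):

    # Re-run the SPDK setup script without clearing an existing reservation.
    # (Sketch based on the environment variables visible in the trace above.)
    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    sudo CLEAR_HUGE=no NRHUGE=512 ./scripts/setup.sh

    # Kernel-side view of what is actually reserved afterwards:
    cat /proc/sys/vm/nr_hugepages
    grep HugePages /sys/devices/system/node/node0/meminfo
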
mapfile -t mem 00:04:03.468 12:31:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.468 12:31:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109322756 kB' 'MemAvailable: 112752828 kB' 'Buffers: 9536 kB' 'Cached: 9549388 kB' 'SwapCached: 0 kB' 'Active: 7162720 kB' 'Inactive: 3687920 kB' 'Active(anon): 6717020 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1295036 kB' 'Mapped: 145992 kB' 'Shmem: 5425304 kB' 'KReclaimable: 241424 kB' 'Slab: 1116368 kB' 'SReclaimable: 241424 kB' 'SUnreclaim: 874944 kB' 'KernelStack: 26848 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9031848 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232976 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # [[ 
Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.468 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.468 12:31:36 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.469 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.469 12:31:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.469 12:31:36 -- setup/common.sh@33 -- # echo 0 00:04:03.469 12:31:36 -- setup/common.sh@33 -- # return 0 00:04:03.469 12:31:36 -- setup/hugepages.sh@97 -- # anon=0 00:04:03.469 12:31:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.469 12:31:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.469 12:31:36 -- setup/common.sh@18 -- # local node= 00:04:03.469 12:31:36 -- setup/common.sh@19 -- # local var val 00:04:03.469 12:31:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.469 12:31:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.469 12:31:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.469 12:31:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.469 12:31:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.735 12:31:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.735 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 12:31:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109323568 kB' 'MemAvailable: 112753640 kB' 'Buffers: 9536 kB' 'Cached: 9549392 kB' 'SwapCached: 0 kB' 'Active: 7163520 kB' 'Inactive: 3687920 kB' 'Active(anon): 6717820 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1295832 kB' 'Mapped: 145916 kB' 'Shmem: 5425308 kB' 'KReclaimable: 241424 kB' 'Slab: 1115944 kB' 'SReclaimable: 241424 kB' 'SUnreclaim: 874520 kB' 'KernelStack: 27056 kB' 'PageTables: 9548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9031860 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232992 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:04:03.735 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 12:31:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.735 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 12:31:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.735 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 12:31:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.735 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 12:31:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.735 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 12:31:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.735 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 12:31:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.735 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 12:31:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 
12:31:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 12:31:36 -- setup/common.sh@33 -- # echo 0 00:04:03.737 12:31:36 -- setup/common.sh@33 -- # return 0 00:04:03.737 12:31:36 -- setup/hugepages.sh@99 -- # surp=0 00:04:03.737 12:31:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.737 12:31:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.737 12:31:36 -- setup/common.sh@18 -- # local node= 00:04:03.737 12:31:36 -- setup/common.sh@19 -- # local var val 00:04:03.737 12:31:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.737 12:31:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.737 12:31:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.737 12:31:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.737 12:31:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.737 12:31:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.737 12:31:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109320796 kB' 'MemAvailable: 112750868 kB' 'Buffers: 9536 kB' 'Cached: 9549404 kB' 'SwapCached: 0 kB' 'Active: 7164048 kB' 'Inactive: 3687920 kB' 'Active(anon): 6718348 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1296308 kB' 'Mapped: 145916 kB' 'Shmem: 5425320 kB' 'KReclaimable: 241424 kB' 'Slab: 1115944 kB' 'SReclaimable: 241424 kB' 'SUnreclaim: 874520 kB' 'KernelStack: 27200 kB' 'PageTables: 9372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9031876 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 233056 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- 
setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 12:31:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 
12:31:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 12:31:36 -- setup/common.sh@33 -- # echo 0 00:04:03.738 
12:31:36 -- setup/common.sh@33 -- # return 0 00:04:03.738 12:31:36 -- setup/hugepages.sh@100 -- # resv=0 00:04:03.738 12:31:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:03.738 nr_hugepages=1024 00:04:03.738 12:31:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.738 resv_hugepages=0 00:04:03.738 12:31:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.738 surplus_hugepages=0 00:04:03.738 12:31:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.738 anon_hugepages=0 00:04:03.738 12:31:36 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.738 12:31:36 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:03.738 12:31:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.738 12:31:36 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.738 12:31:36 -- setup/common.sh@18 -- # local node= 00:04:03.738 12:31:36 -- setup/common.sh@19 -- # local var val 00:04:03.738 12:31:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.738 12:31:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.738 12:31:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.738 12:31:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.738 12:31:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.738 12:31:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324748 kB' 'MemFree: 109321756 kB' 'MemAvailable: 112751828 kB' 'Buffers: 9536 kB' 'Cached: 9549416 kB' 'SwapCached: 0 kB' 'Active: 7163836 kB' 'Inactive: 3687920 kB' 'Active(anon): 6718136 kB' 'Inactive(anon): 0 kB' 'Active(file): 445700 kB' 'Inactive(file): 3687920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1296080 kB' 'Mapped: 145916 kB' 'Shmem: 5425332 kB' 'KReclaimable: 241424 kB' 'Slab: 1115944 kB' 'SReclaimable: 241424 kB' 'SUnreclaim: 874520 kB' 'KernelStack: 27024 kB' 'PageTables: 9040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502400 kB' 'Committed_AS: 9026960 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 232992 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 726392 kB' 'DirectMap2M: 10487808 kB' 'DirectMap1G: 125829120 kB' 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.738 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.738 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 
00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 
12:31:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.739 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.740 12:31:36 -- setup/common.sh@33 -- # echo 1024 00:04:03.740 12:31:36 -- setup/common.sh@33 -- # return 0 00:04:03.740 12:31:36 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.740 12:31:36 -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.740 12:31:36 -- setup/hugepages.sh@27 -- # local node 00:04:03.740 12:31:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.740 12:31:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.740 12:31:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.740 12:31:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:03.740 12:31:36 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.740 12:31:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.740 12:31:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.740 12:31:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.740 12:31:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.740 12:31:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.740 12:31:36 
-- setup/common.sh@18 -- # local node=0 00:04:03.740 12:31:36 -- setup/common.sh@19 -- # local var val 00:04:03.740 12:31:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.740 12:31:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.740 12:31:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.740 12:31:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.740 12:31:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.740 12:31:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652968 kB' 'MemFree: 53253344 kB' 'MemUsed: 12399624 kB' 'SwapCached: 0 kB' 'Active: 5202720 kB' 'Inactive: 3583492 kB' 'Active(anon): 4956868 kB' 'Inactive(anon): 0 kB' 'Active(file): 245852 kB' 'Inactive(file): 3583492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7892104 kB' 'Mapped: 54136 kB' 'AnonPages: 897316 kB' 'Shmem: 4062760 kB' 'KernelStack: 13992 kB' 'PageTables: 4720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121368 kB' 'Slab: 534752 kB' 'SReclaimable: 121368 kB' 'SUnreclaim: 413384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.740 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.740 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:03.741 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # continue 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 12:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 12:31:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.741 12:31:36 -- setup/common.sh@33 -- # echo 0 00:04:03.741 12:31:36 -- setup/common.sh@33 -- # return 0 00:04:03.741 12:31:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.741 12:31:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.741 12:31:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.741 12:31:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.741 12:31:36 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:03.741 node0=1024 expecting 1024 00:04:03.741 12:31:36 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.741 00:04:03.741 real 0m8.054s 00:04:03.741 user 0m3.120s 00:04:03.741 sys 0m5.046s 00:04:03.741 12:31:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:03.741 12:31:36 -- common/autotest_common.sh@10 -- # set +x 00:04:03.741 ************************************ 00:04:03.741 END TEST no_shrink_alloc 00:04:03.741 ************************************ 00:04:03.741 12:31:36 -- setup/hugepages.sh@217 -- # clear_hp 00:04:03.741 12:31:36 -- setup/hugepages.sh@37 -- # local node hp 00:04:03.741 12:31:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:03.741 12:31:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.741 12:31:36 -- setup/hugepages.sh@41 -- # echo 0 00:04:03.741 12:31:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.741 12:31:36 -- setup/hugepages.sh@41 -- # echo 0 00:04:03.741 12:31:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:03.741 12:31:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.741 12:31:36 -- setup/hugepages.sh@41 -- # echo 0 00:04:03.741 12:31:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.741 12:31:36 -- setup/hugepages.sh@41 -- # echo 0 00:04:03.741 12:31:36 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:03.741 12:31:36 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:03.741 00:04:03.741 real 0m29.109s 00:04:03.741 user 0m11.359s 00:04:03.741 sys 0m18.136s 00:04:03.741 12:31:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:03.741 12:31:36 -- common/autotest_common.sh@10 -- # set +x 
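The trace above shows setup/common.sh switching from /proc/meminfo to the per-node /sys/devices/system/node/node0/meminfo file, stripping the "Node N" prefix, and scanning every field until it reaches HugePages_Surp. A minimal standalone sketch of that lookup follows; the function name get_node_meminfo is illustrative only, not part of the test suite.

#!/usr/bin/env bash
# Sketch of the per-node meminfo lookup traced above (assumes the kernel's
# "Node <n> FieldName: value kB" layout for the per-node file).
shopt -s extglob
get_node_meminfo() {
    local node=$1 wanted=$2 var val _
    local mem_f=/proc/meminfo
    # Prefer the per-node file when it exists, exactly as the trace shows.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # strip the "Node N " prefix
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$wanted" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}
# The hugepages test expects node0's HugePages_Surp to be 0:
get_node_meminfo 0 HugePages_Surp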
00:04:03.741 ************************************ 00:04:03.741 END TEST hugepages 00:04:03.741 ************************************ 00:04:03.741 12:31:36 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:03.741 12:31:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:03.741 12:31:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:03.741 12:31:36 -- common/autotest_common.sh@10 -- # set +x 00:04:03.741 ************************************ 00:04:03.741 START TEST driver 00:04:03.741 ************************************ 00:04:03.741 12:31:36 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:04.004 * Looking for test storage... 00:04:04.004 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:04.004 12:31:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:04.004 12:31:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:04.004 12:31:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:04.004 12:31:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:04.004 12:31:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:04.004 12:31:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:04.004 12:31:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:04.004 12:31:36 -- scripts/common.sh@335 -- # IFS=.-: 00:04:04.004 12:31:36 -- scripts/common.sh@335 -- # read -ra ver1 00:04:04.004 12:31:36 -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.004 12:31:36 -- scripts/common.sh@336 -- # read -ra ver2 00:04:04.004 12:31:36 -- scripts/common.sh@337 -- # local 'op=<' 00:04:04.004 12:31:36 -- scripts/common.sh@339 -- # ver1_l=2 00:04:04.004 12:31:36 -- scripts/common.sh@340 -- # ver2_l=1 00:04:04.004 12:31:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:04.004 12:31:36 -- scripts/common.sh@343 -- # case "$op" in 00:04:04.004 12:31:36 -- scripts/common.sh@344 -- # : 1 00:04:04.004 12:31:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:04.004 12:31:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:04.004 12:31:36 -- scripts/common.sh@364 -- # decimal 1 00:04:04.004 12:31:36 -- scripts/common.sh@352 -- # local d=1 00:04:04.004 12:31:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.004 12:31:36 -- scripts/common.sh@354 -- # echo 1 00:04:04.004 12:31:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:04.004 12:31:36 -- scripts/common.sh@365 -- # decimal 2 00:04:04.004 12:31:36 -- scripts/common.sh@352 -- # local d=2 00:04:04.004 12:31:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.004 12:31:36 -- scripts/common.sh@354 -- # echo 2 00:04:04.004 12:31:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:04.004 12:31:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:04.004 12:31:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:04.004 12:31:36 -- scripts/common.sh@367 -- # return 0 00:04:04.004 12:31:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.004 12:31:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:04.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.004 --rc genhtml_branch_coverage=1 00:04:04.004 --rc genhtml_function_coverage=1 00:04:04.004 --rc genhtml_legend=1 00:04:04.004 --rc geninfo_all_blocks=1 00:04:04.004 --rc geninfo_unexecuted_blocks=1 00:04:04.004 00:04:04.004 ' 00:04:04.004 12:31:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:04.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.004 --rc genhtml_branch_coverage=1 00:04:04.004 --rc genhtml_function_coverage=1 00:04:04.004 --rc genhtml_legend=1 00:04:04.004 --rc geninfo_all_blocks=1 00:04:04.004 --rc geninfo_unexecuted_blocks=1 00:04:04.004 00:04:04.004 ' 00:04:04.004 12:31:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:04.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.004 --rc genhtml_branch_coverage=1 00:04:04.004 --rc genhtml_function_coverage=1 00:04:04.004 --rc genhtml_legend=1 00:04:04.004 --rc geninfo_all_blocks=1 00:04:04.004 --rc geninfo_unexecuted_blocks=1 00:04:04.004 00:04:04.004 ' 00:04:04.004 12:31:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:04.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.004 --rc genhtml_branch_coverage=1 00:04:04.004 --rc genhtml_function_coverage=1 00:04:04.004 --rc genhtml_legend=1 00:04:04.004 --rc geninfo_all_blocks=1 00:04:04.004 --rc geninfo_unexecuted_blocks=1 00:04:04.004 00:04:04.004 ' 00:04:04.004 12:31:36 -- setup/driver.sh@68 -- # setup reset 00:04:04.004 12:31:36 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:04.004 12:31:37 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:09.305 12:31:42 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:09.305 12:31:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.305 12:31:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.305 12:31:42 -- common/autotest_common.sh@10 -- # set +x 00:04:09.305 ************************************ 00:04:09.305 START TEST guess_driver 00:04:09.305 ************************************ 00:04:09.305 12:31:42 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:09.305 12:31:42 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:09.305 12:31:42 -- setup/driver.sh@47 -- # local fail=0 00:04:09.305 12:31:42 -- setup/driver.sh@49 -- # pick_driver 00:04:09.305 12:31:42 -- setup/driver.sh@36 -- 
# vfio 00:04:09.305 12:31:42 -- setup/driver.sh@21 -- # local iommu_grups 00:04:09.305 12:31:42 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:09.305 12:31:42 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:09.305 12:31:42 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:09.305 12:31:42 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:09.305 12:31:42 -- setup/driver.sh@29 -- # (( 319 > 0 )) 00:04:09.305 12:31:42 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:09.305 12:31:42 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:09.305 12:31:42 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:09.305 12:31:42 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:09.305 12:31:42 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:09.305 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:09.305 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:09.305 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:09.305 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:09.305 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:09.305 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:09.305 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:09.305 12:31:42 -- setup/driver.sh@30 -- # return 0 00:04:09.305 12:31:42 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:09.305 12:31:42 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:09.305 12:31:42 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:09.305 12:31:42 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:09.305 Looking for driver=vfio-pci 00:04:09.305 12:31:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.305 12:31:42 -- setup/driver.sh@45 -- # setup output config 00:04:09.305 12:31:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.305 12:31:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:13.520 12:31:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.520 12:31:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.520 12:31:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.520 12:31:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.520 12:31:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.520 12:31:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.520 12:31:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.520 12:31:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.520 12:31:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.520 12:31:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.520 12:31:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.520 12:31:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.520 12:31:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.520 12:31:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.520 12:31:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.520 12:31:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.520 12:31:45 -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:04:13.520 12:31:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.520 12:31:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.520 12:31:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.520 12:31:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.520 12:31:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.520 12:31:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.520 12:31:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.520 12:31:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.520 12:31:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.520 12:31:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.520 12:31:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.520 12:31:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.520 12:31:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.520 12:31:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.520 12:31:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.520 12:31:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.520 12:31:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.520 12:31:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.520 12:31:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.520 12:31:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.520 12:31:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.520 12:31:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.520 12:31:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.521 12:31:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.521 12:31:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.521 12:31:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.521 12:31:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.521 12:31:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.521 12:31:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.521 12:31:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.521 12:31:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.521 12:31:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.521 12:31:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.521 12:31:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.521 12:31:46 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:13.521 12:31:46 -- setup/driver.sh@65 -- # setup reset 00:04:13.521 12:31:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.521 12:31:46 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:18.817 00:04:18.817 real 0m9.381s 00:04:18.817 user 0m3.011s 00:04:18.817 sys 0m5.548s 00:04:18.817 12:31:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:18.817 12:31:51 -- common/autotest_common.sh@10 -- # set +x 00:04:18.817 ************************************ 00:04:18.817 END TEST guess_driver 00:04:18.817 ************************************ 00:04:18.817 00:04:18.817 real 0m14.861s 00:04:18.817 user 0m4.673s 00:04:18.817 sys 0m8.536s 00:04:18.817 12:31:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:18.817 12:31:51 -- common/autotest_common.sh@10 -- # set +x 00:04:18.817 
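The guess_driver trace above decides which PCI driver the setup scripts should use: it prefers vfio-pci when the host exposes IOMMU groups (319 on this node) and modprobe can resolve vfio_pci's dependency chain, otherwise it reports "No valid driver found". A rough approximation of that check is sketched below; it mirrors the logged steps rather than reproducing the exact setup/driver.sh code.

#!/usr/bin/env bash
# Sketch: prefer vfio-pci when IOMMU groups exist and vfio_pci's module chain
# resolves; this is a simplified stand-in for the traced pick_driver logic.
shopt -s nullglob
guess_vfio_driver() {
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if ((${#iommu_groups[@]} > 0)) && modprobe --show-depends vfio_pci &> /dev/null; then
        echo vfio-pci
    else
        # The sentinel string the test compares against on failure.
        echo 'No valid driver found'
        return 1
    fi
}
driver=$(guess_vfio_driver) && echo "Looking for driver=$driver"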
************************************ 00:04:18.817 END TEST driver 00:04:18.817 ************************************ 00:04:18.817 12:31:51 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:18.817 12:31:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.817 12:31:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.817 12:31:51 -- common/autotest_common.sh@10 -- # set +x 00:04:18.817 ************************************ 00:04:18.817 START TEST devices 00:04:18.817 ************************************ 00:04:18.817 12:31:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:18.817 * Looking for test storage... 00:04:18.817 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:18.817 12:31:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:18.817 12:31:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:18.817 12:31:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:18.817 12:31:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:18.817 12:31:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:18.817 12:31:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:18.817 12:31:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:18.817 12:31:51 -- scripts/common.sh@335 -- # IFS=.-: 00:04:18.817 12:31:51 -- scripts/common.sh@335 -- # read -ra ver1 00:04:18.817 12:31:51 -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.817 12:31:51 -- scripts/common.sh@336 -- # read -ra ver2 00:04:18.817 12:31:51 -- scripts/common.sh@337 -- # local 'op=<' 00:04:18.817 12:31:51 -- scripts/common.sh@339 -- # ver1_l=2 00:04:18.817 12:31:51 -- scripts/common.sh@340 -- # ver2_l=1 00:04:18.817 12:31:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:18.817 12:31:51 -- scripts/common.sh@343 -- # case "$op" in 00:04:18.817 12:31:51 -- scripts/common.sh@344 -- # : 1 00:04:18.817 12:31:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:18.817 12:31:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:18.817 12:31:51 -- scripts/common.sh@364 -- # decimal 1 00:04:18.817 12:31:51 -- scripts/common.sh@352 -- # local d=1 00:04:18.817 12:31:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.817 12:31:51 -- scripts/common.sh@354 -- # echo 1 00:04:18.817 12:31:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:18.817 12:31:51 -- scripts/common.sh@365 -- # decimal 2 00:04:18.817 12:31:51 -- scripts/common.sh@352 -- # local d=2 00:04:18.817 12:31:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.817 12:31:51 -- scripts/common.sh@354 -- # echo 2 00:04:18.817 12:31:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:18.817 12:31:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:18.817 12:31:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:18.817 12:31:51 -- scripts/common.sh@367 -- # return 0 00:04:18.817 12:31:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.817 12:31:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:18.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.817 --rc genhtml_branch_coverage=1 00:04:18.817 --rc genhtml_function_coverage=1 00:04:18.817 --rc genhtml_legend=1 00:04:18.817 --rc geninfo_all_blocks=1 00:04:18.817 --rc geninfo_unexecuted_blocks=1 00:04:18.817 00:04:18.817 ' 00:04:18.817 12:31:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:18.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.817 --rc genhtml_branch_coverage=1 00:04:18.817 --rc genhtml_function_coverage=1 00:04:18.817 --rc genhtml_legend=1 00:04:18.817 --rc geninfo_all_blocks=1 00:04:18.817 --rc geninfo_unexecuted_blocks=1 00:04:18.817 00:04:18.817 ' 00:04:18.817 12:31:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:18.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.817 --rc genhtml_branch_coverage=1 00:04:18.817 --rc genhtml_function_coverage=1 00:04:18.817 --rc genhtml_legend=1 00:04:18.817 --rc geninfo_all_blocks=1 00:04:18.817 --rc geninfo_unexecuted_blocks=1 00:04:18.817 00:04:18.817 ' 00:04:18.817 12:31:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:18.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.817 --rc genhtml_branch_coverage=1 00:04:18.817 --rc genhtml_function_coverage=1 00:04:18.818 --rc genhtml_legend=1 00:04:18.818 --rc geninfo_all_blocks=1 00:04:18.818 --rc geninfo_unexecuted_blocks=1 00:04:18.818 00:04:18.818 ' 00:04:18.818 12:31:51 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:18.818 12:31:51 -- setup/devices.sh@192 -- # setup reset 00:04:18.818 12:31:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.818 12:31:51 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.035 12:31:56 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:23.035 12:31:56 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:23.035 12:31:56 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:23.035 12:31:56 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:23.035 12:31:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:23.035 12:31:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:23.035 12:31:56 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:23.035 12:31:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.035 12:31:56 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:23.035 12:31:56 -- setup/devices.sh@196 -- # blocks=() 00:04:23.035 12:31:56 -- setup/devices.sh@196 -- # declare -a blocks 00:04:23.035 12:31:56 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:23.035 12:31:56 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:23.035 12:31:56 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:23.035 12:31:56 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.035 12:31:56 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:23.035 12:31:56 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:23.035 12:31:56 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:23.035 12:31:56 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:23.035 12:31:56 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:23.035 12:31:56 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:23.035 12:31:56 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:23.297 No valid GPT data, bailing 00:04:23.297 12:31:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:23.297 12:31:56 -- scripts/common.sh@393 -- # pt= 00:04:23.297 12:31:56 -- scripts/common.sh@394 -- # return 1 00:04:23.297 12:31:56 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:23.297 12:31:56 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:23.297 12:31:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:23.297 12:31:56 -- setup/common.sh@80 -- # echo 1920383410176 00:04:23.297 12:31:56 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:23.297 12:31:56 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.297 12:31:56 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:23.297 12:31:56 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:23.297 12:31:56 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:23.297 12:31:56 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:23.297 12:31:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.297 12:31:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.297 12:31:56 -- common/autotest_common.sh@10 -- # set +x 00:04:23.297 ************************************ 00:04:23.297 START TEST nvme_mount 00:04:23.297 ************************************ 00:04:23.297 12:31:56 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:23.297 12:31:56 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:23.297 12:31:56 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:23.297 12:31:56 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.297 12:31:56 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.297 12:31:56 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:23.297 12:31:56 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:23.297 12:31:56 -- setup/common.sh@40 -- # local part_no=1 00:04:23.297 12:31:56 -- setup/common.sh@41 -- # local size=1073741824 00:04:23.297 12:31:56 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:23.297 12:31:56 -- setup/common.sh@44 -- # parts=() 00:04:23.297 12:31:56 -- setup/common.sh@44 -- # local parts 00:04:23.298 12:31:56 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:23.298 12:31:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.298 12:31:56 
-- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.298 12:31:56 -- setup/common.sh@46 -- # (( part++ )) 00:04:23.298 12:31:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.298 12:31:56 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:23.298 12:31:56 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:23.298 12:31:56 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:24.241 Creating new GPT entries in memory. 00:04:24.241 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:24.241 other utilities. 00:04:24.241 12:31:57 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:24.241 12:31:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.241 12:31:57 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:24.241 12:31:57 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:24.241 12:31:57 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:25.185 Creating new GPT entries in memory. 00:04:25.185 The operation has completed successfully. 00:04:25.185 12:31:58 -- setup/common.sh@57 -- # (( part++ )) 00:04:25.185 12:31:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.185 12:31:58 -- setup/common.sh@62 -- # wait 289797 00:04:25.447 12:31:58 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.447 12:31:58 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:25.447 12:31:58 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.447 12:31:58 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:25.447 12:31:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:25.447 12:31:58 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.447 12:31:58 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.447 12:31:58 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:25.447 12:31:58 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:25.447 12:31:58 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.447 12:31:58 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.447 12:31:58 -- setup/devices.sh@53 -- # local found=0 00:04:25.447 12:31:58 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.447 12:31:58 -- setup/devices.sh@56 -- # : 00:04:25.447 12:31:58 -- setup/devices.sh@59 -- # local pci status 00:04:25.447 12:31:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.447 12:31:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:25.447 12:31:58 -- setup/devices.sh@47 -- # setup output config 00:04:25.447 12:31:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.447 12:31:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ 
0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:28.756 12:32:01 -- setup/devices.sh@63 -- # found=1 00:04:28.756 12:32:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.756 12:32:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.756 12:32:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.756 12:32:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.756 12:32:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.756 12:32:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.756 12:32:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.756 12:32:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.756 12:32:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.756 12:32:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.756 12:32:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.756 12:32:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.756 12:32:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.756 12:32:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.756 12:32:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.756 12:32:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.756 12:32:01 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.756 12:32:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.330 12:32:02 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.330 12:32:02 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:29.330 12:32:02 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.330 12:32:02 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 
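Up to this point the nvme_mount test has zapped and repartitioned the disk with sgdisk, made an ext4 filesystem on nvme0n1p1, mounted it under test/setup/nvme_mount, and confirmed through a PCI_ALLOWED-restricted "setup.sh config" scan that the mounted partition keeps 0000:65:00.0 from being bound. A condensed sketch of that sequence, with the device and paths taken from this run; treat it as an outline of the flow, not the exact setup/devices.sh implementation.

#!/usr/bin/env bash
set -e
# Sketch of the nvme_mount flow traced above.
disk=/dev/nvme0n1
part=${disk}p1
mnt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                  # wipe any existing GPT/MBR signatures
sgdisk "$disk" --new=1:2048:2099199       # 1 GiB first partition (512 B sectors)
mkfs.ext4 -qF "$part"
mkdir -p "$mnt"
mount "$part" "$mnt"
: > "$mnt/test_nvme"                      # dummy file the verify step checks for

# Simplified stand-in for the verify loop: with only the test controller
# allowed, setup.sh must report the partition as busy instead of binding it.
PCI_ALLOWED=0000:65:00.0 \
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config \
    | grep -q 'mount@nvme0n1:nvme0n1p1'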
00:04:29.330 12:32:02 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.330 12:32:02 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:29.330 12:32:02 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.330 12:32:02 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.330 12:32:02 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.330 12:32:02 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:29.330 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:29.330 12:32:02 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.330 12:32:02 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:29.591 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:29.591 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:29.591 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:29.591 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:29.591 12:32:02 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:29.591 12:32:02 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:29.591 12:32:02 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.591 12:32:02 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:29.591 12:32:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:29.591 12:32:02 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.592 12:32:02 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.592 12:32:02 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:29.592 12:32:02 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:29.592 12:32:02 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.592 12:32:02 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.592 12:32:02 -- setup/devices.sh@53 -- # local found=0 00:04:29.592 12:32:02 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.592 12:32:02 -- setup/devices.sh@56 -- # : 00:04:29.592 12:32:02 -- setup/devices.sh@59 -- # local pci status 00:04:29.592 12:32:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.592 12:32:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:29.592 12:32:02 -- setup/devices.sh@47 -- # setup output config 00:04:29.592 12:32:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.592 12:32:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:32.896 12:32:05 -- setup/devices.sh@63 -- # found=1 00:04:32.896 12:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.896 12:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.896 12:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.896 12:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.896 12:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.896 12:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.896 12:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.896 12:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.896 12:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.896 12:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.896 12:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.896 12:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.896 12:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.896 12:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.896 12:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.896 12:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.896 12:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.896 12:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.468 12:32:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.468 12:32:06 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:33.468 12:32:06 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.468 12:32:06 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:33.468 12:32:06 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.468 12:32:06 -- setup/devices.sh@123 -- # umount 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.468 12:32:06 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:33.468 12:32:06 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:33.468 12:32:06 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:33.468 12:32:06 -- setup/devices.sh@50 -- # local mount_point= 00:04:33.468 12:32:06 -- setup/devices.sh@51 -- # local test_file= 00:04:33.468 12:32:06 -- setup/devices.sh@53 -- # local found=0 00:04:33.468 12:32:06 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:33.468 12:32:06 -- setup/devices.sh@59 -- # local pci status 00:04:33.468 12:32:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.468 12:32:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:33.468 12:32:06 -- setup/devices.sh@47 -- # setup output config 00:04:33.468 12:32:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.468 12:32:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:36.775 12:32:09 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.775 12:32:09 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:36.775 12:32:09 -- setup/devices.sh@63 -- # found=1 00:04:36.775 12:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.775 12:32:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.775 12:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.775 12:32:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.776 12:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.776 12:32:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.776 12:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.776 12:32:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.776 12:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.776 12:32:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.776 12:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.776 12:32:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.776 12:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.776 12:32:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.776 12:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.776 12:32:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.776 12:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.776 12:32:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.776 12:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.776 12:32:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.776 12:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.776 12:32:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.776 12:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.776 12:32:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.776 12:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.776 12:32:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.776 12:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.776 12:32:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.776 12:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.776 12:32:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.776 12:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.776 12:32:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.776 12:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.349 12:32:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.349 12:32:10 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:37.350 12:32:10 -- setup/devices.sh@68 -- # return 0 00:04:37.350 12:32:10 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:37.350 12:32:10 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.350 12:32:10 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.350 12:32:10 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.350 12:32:10 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.350 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.350 00:04:37.350 real 0m14.103s 00:04:37.350 user 0m4.324s 00:04:37.350 sys 0m7.646s 00:04:37.350 12:32:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:37.350 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:04:37.350 ************************************ 00:04:37.350 END TEST nvme_mount 00:04:37.350 ************************************ 00:04:37.350 12:32:10 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:37.350 12:32:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.350 12:32:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.350 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:04:37.350 ************************************ 00:04:37.350 START TEST dm_mount 00:04:37.350 ************************************ 00:04:37.350 12:32:10 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:37.350 12:32:10 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:37.350 12:32:10 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:37.350 12:32:10 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:37.350 12:32:10 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:37.350 12:32:10 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:37.350 12:32:10 -- setup/common.sh@40 -- # local part_no=2 00:04:37.350 12:32:10 -- setup/common.sh@41 -- # local size=1073741824 00:04:37.350 12:32:10 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:37.350 12:32:10 -- setup/common.sh@44 -- # parts=() 00:04:37.350 12:32:10 -- setup/common.sh@44 -- # local parts 00:04:37.350 12:32:10 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:37.350 12:32:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.350 12:32:10 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:37.350 12:32:10 -- setup/common.sh@46 -- # (( part++ )) 00:04:37.350 12:32:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.350 12:32:10 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:37.350 12:32:10 -- setup/common.sh@46 -- # (( part++ )) 00:04:37.350 12:32:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.350 12:32:10 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:37.350 12:32:10 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:37.350 
12:32:10 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:38.295 Creating new GPT entries in memory. 00:04:38.295 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:38.295 other utilities. 00:04:38.295 12:32:11 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:38.295 12:32:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.295 12:32:11 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:38.295 12:32:11 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:38.295 12:32:11 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:39.684 Creating new GPT entries in memory. 00:04:39.684 The operation has completed successfully. 00:04:39.684 12:32:12 -- setup/common.sh@57 -- # (( part++ )) 00:04:39.684 12:32:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.684 12:32:12 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:39.684 12:32:12 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:39.684 12:32:12 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:40.631 The operation has completed successfully. 00:04:40.631 12:32:13 -- setup/common.sh@57 -- # (( part++ )) 00:04:40.631 12:32:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.631 12:32:13 -- setup/common.sh@62 -- # wait 295112 00:04:40.631 12:32:13 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:40.631 12:32:13 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:40.631 12:32:13 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:40.631 12:32:13 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:40.631 12:32:13 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:40.631 12:32:13 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:40.631 12:32:13 -- setup/devices.sh@161 -- # break 00:04:40.631 12:32:13 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:40.631 12:32:13 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:40.631 12:32:13 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:40.631 12:32:13 -- setup/devices.sh@166 -- # dm=dm-0 00:04:40.631 12:32:13 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:40.631 12:32:13 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:40.631 12:32:13 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:40.631 12:32:13 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:40.631 12:32:13 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:40.631 12:32:13 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:40.631 12:32:13 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:40.631 12:32:13 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:40.631 12:32:13 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:40.632 12:32:13 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:40.632 12:32:13 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:40.632 12:32:13 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:40.632 12:32:13 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:40.632 12:32:13 -- setup/devices.sh@53 -- # local found=0 00:04:40.632 12:32:13 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:40.632 12:32:13 -- setup/devices.sh@56 -- # : 00:04:40.632 12:32:13 -- setup/devices.sh@59 -- # local pci status 00:04:40.632 12:32:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.632 12:32:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:40.632 12:32:13 -- setup/devices.sh@47 -- # setup output config 00:04:40.632 12:32:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.632 12:32:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:43.939 12:32:16 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.939 12:32:16 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:43.939 12:32:16 -- setup/devices.sh@63 -- # found=1 00:04:43.939 12:32:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.939 12:32:16 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.939 12:32:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.939 12:32:16 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.939 12:32:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.939 12:32:16 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.939 12:32:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.939 12:32:16 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.939 12:32:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.939 12:32:16 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.939 12:32:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.939 12:32:16 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.939 12:32:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.939 12:32:16 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.939 12:32:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.939 12:32:16 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.939 12:32:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.939 12:32:16 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.939 12:32:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.939 12:32:17 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.939 12:32:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.939 12:32:17 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.939 12:32:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.939 12:32:17 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.939 12:32:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.939 12:32:17 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.939 12:32:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.939 12:32:17 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.939 12:32:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.939 12:32:17 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.939 12:32:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.939 12:32:17 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.939 12:32:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.513 12:32:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.513 12:32:17 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:44.513 12:32:17 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:44.513 12:32:17 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:44.513 12:32:17 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:44.513 12:32:17 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:44.513 12:32:17 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:44.513 12:32:17 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:44.513 12:32:17 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:44.513 12:32:17 -- setup/devices.sh@50 -- # local mount_point= 00:04:44.513 12:32:17 -- setup/devices.sh@51 -- # local test_file= 00:04:44.513 12:32:17 -- setup/devices.sh@53 -- # local found=0 00:04:44.513 12:32:17 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:44.513 12:32:17 -- setup/devices.sh@59 -- # local pci status 00:04:44.513 12:32:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.513 12:32:17 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:44.513 12:32:17 -- setup/devices.sh@47 -- # setup output config 00:04:44.513 12:32:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.513 12:32:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:47.819 12:32:20 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.819 12:32:20 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:47.819 12:32:20 -- setup/devices.sh@63 -- # found=1 00:04:47.819 12:32:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.819 12:32:20 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.819 12:32:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.819 12:32:20 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.819 12:32:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.819 12:32:20 -- 
setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.819 12:32:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.819 12:32:20 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.819 12:32:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.819 12:32:20 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.819 12:32:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.819 12:32:20 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.819 12:32:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.819 12:32:20 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.819 12:32:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.819 12:32:20 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.819 12:32:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.819 12:32:20 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.819 12:32:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.819 12:32:20 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.819 12:32:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.819 12:32:20 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.819 12:32:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.819 12:32:20 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.819 12:32:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.819 12:32:20 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.819 12:32:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.819 12:32:20 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.819 12:32:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.819 12:32:20 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.819 12:32:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.819 12:32:20 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.819 12:32:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.393 12:32:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:48.393 12:32:21 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:48.393 12:32:21 -- setup/devices.sh@68 -- # return 0 00:04:48.393 12:32:21 -- setup/devices.sh@187 -- # cleanup_dm 00:04:48.393 12:32:21 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:48.393 12:32:21 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:48.393 12:32:21 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:48.393 12:32:21 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.393 12:32:21 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:48.393 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:48.393 12:32:21 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:48.393 12:32:21 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:48.393 00:04:48.393 real 0m10.995s 00:04:48.393 user 0m2.964s 00:04:48.393 sys 0m5.090s 00:04:48.393 12:32:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.393 12:32:21 -- common/autotest_common.sh@10 -- # set +x 00:04:48.393 
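The dm_mount trace above carves two GPT partitions out of /dev/nvme0n1 with sgdisk, builds a device-mapper target named nvme_dm_test over them, formats it with mkfs.ext4, mounts it under test/setup/dm_mount, verifies the holder/mount state, and then tears everything down. A standalone sketch of the same flow is shown below; the single linear table, sector counts and mount point are illustrative assumptions (the harness's actual dm table also covers the second partition).

  # sketch: partition, wrap in device-mapper, format, mount, then clean up (values assumed)
  sgdisk /dev/nvme0n1 --new=1:2048:2099199           # partition 1, as in the trace
  sgdisk /dev/nvme0n1 --new=2:2099200:4196351        # partition 2, as in the trace
  size=$(blockdev --getsz /dev/nvme0n1p1)            # partition 1 size in 512-byte sectors
  dmsetup create nvme_dm_test --table "0 $size linear /dev/nvme0n1p1 0"   # assumed: linear map over p1 only
  mkfs.ext4 -qF /dev/mapper/nvme_dm_test
  mkdir -p /tmp/dm_mount && mount /dev/mapper/nvme_dm_test /tmp/dm_mount  # assumed mount point
  umount /tmp/dm_mount
  dmsetup remove --force nvme_dm_test
  wipefs --all /dev/nvme0n1p1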
************************************ 00:04:48.393 END TEST dm_mount 00:04:48.393 ************************************ 00:04:48.393 12:32:21 -- setup/devices.sh@1 -- # cleanup 00:04:48.393 12:32:21 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:48.393 12:32:21 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.393 12:32:21 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.393 12:32:21 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:48.393 12:32:21 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.393 12:32:21 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:48.654 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:48.654 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:48.654 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:48.655 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:48.655 12:32:21 -- setup/devices.sh@12 -- # cleanup_dm 00:04:48.655 12:32:21 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:48.655 12:32:21 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:48.655 12:32:21 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.655 12:32:21 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:48.655 12:32:21 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.655 12:32:21 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:48.655 00:04:48.655 real 0m29.957s 00:04:48.655 user 0m9.088s 00:04:48.655 sys 0m15.684s 00:04:48.655 12:32:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.655 12:32:21 -- common/autotest_common.sh@10 -- # set +x 00:04:48.655 ************************************ 00:04:48.655 END TEST devices 00:04:48.655 ************************************ 00:04:48.655 00:04:48.655 real 1m41.597s 00:04:48.655 user 0m34.505s 00:04:48.655 sys 0m58.348s 00:04:48.655 12:32:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.655 12:32:21 -- common/autotest_common.sh@10 -- # set +x 00:04:48.655 ************************************ 00:04:48.655 END TEST setup.sh 00:04:48.655 ************************************ 00:04:48.655 12:32:21 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:51.963 Hugepages 00:04:51.963 node hugesize free / total 00:04:51.963 node0 1048576kB 0 / 0 00:04:51.963 node0 2048kB 2048 / 2048 00:04:52.224 node1 1048576kB 0 / 0 00:04:52.224 node1 2048kB 0 / 0 00:04:52.224 00:04:52.224 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:52.224 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:52.224 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:52.224 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:52.224 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:52.224 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:52.224 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:52.224 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:52.224 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:52.224 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:52.224 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:52.224 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:52.224 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:52.224 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:52.224 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:52.224 I/OAT 
0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:52.224 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:52.224 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:52.224 12:32:25 -- spdk/autotest.sh@128 -- # uname -s 00:04:52.224 12:32:25 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:52.224 12:32:25 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:52.224 12:32:25 -- common/autotest_common.sh@1526 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:56.436 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:56.436 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:56.436 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:56.436 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:56.436 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:56.436 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:56.436 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:56.436 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:56.436 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:56.436 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:56.436 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:56.436 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:56.436 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:56.436 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:56.436 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:56.436 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:57.824 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:58.086 12:32:31 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:59.029 12:32:32 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:59.029 12:32:32 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:59.029 12:32:32 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:59.029 12:32:32 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:59.029 12:32:32 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:59.029 12:32:32 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:59.029 12:32:32 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:59.029 12:32:32 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:59.029 12:32:32 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:59.290 12:32:32 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:04:59.290 12:32:32 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:65:00.0 00:04:59.290 12:32:32 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:02.592 Waiting for block devices as requested 00:05:02.592 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:02.853 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:02.853 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:02.853 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:03.114 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:03.114 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:03.114 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:03.375 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:03.375 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:03.636 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:03.637 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:03.637 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:03.898 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:03.898 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:03.898 0000:00:01.3 (8086 0b00): 
vfio-pci -> ioatdma 00:05:03.898 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:04.159 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:04.420 12:32:37 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:04.420 12:32:37 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:04.420 12:32:37 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:05:04.420 12:32:37 -- common/autotest_common.sh@1497 -- # grep 0000:65:00.0/nvme/nvme 00:05:04.420 12:32:37 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:04.420 12:32:37 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:04.420 12:32:37 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:04.420 12:32:37 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:04.420 12:32:37 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:04.420 12:32:37 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:04.420 12:32:37 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:04.420 12:32:37 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:04.420 12:32:37 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:04.420 12:32:37 -- common/autotest_common.sh@1540 -- # oacs=' 0x5f' 00:05:04.420 12:32:37 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:04.420 12:32:37 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:04.420 12:32:37 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:04.420 12:32:37 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:04.420 12:32:37 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:04.420 12:32:37 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:04.420 12:32:37 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:04.420 12:32:37 -- common/autotest_common.sh@1552 -- # continue 00:05:04.420 12:32:37 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:04.420 12:32:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.420 12:32:37 -- common/autotest_common.sh@10 -- # set +x 00:05:04.420 12:32:37 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:04.420 12:32:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.420 12:32:37 -- common/autotest_common.sh@10 -- # set +x 00:05:04.420 12:32:37 -- spdk/autotest.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:08.631 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:08.631 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:08.631 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:08.631 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:08.631 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:08.631 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:08.631 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:08.631 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:08.631 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:08.631 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:08.631 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:08.631 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:08.631 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:08.631 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:08.631 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:08.631 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 
00:05:08.631 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:08.631 12:32:41 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:08.631 12:32:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:08.631 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:05:08.631 12:32:41 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:08.631 12:32:41 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:08.631 12:32:41 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:08.631 12:32:41 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:08.631 12:32:41 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:08.631 12:32:41 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:08.631 12:32:41 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:08.631 12:32:41 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:08.631 12:32:41 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:08.631 12:32:41 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:08.631 12:32:41 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:08.631 12:32:41 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:08.631 12:32:41 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:65:00.0 00:05:08.631 12:32:41 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:08.631 12:32:41 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:08.631 12:32:41 -- common/autotest_common.sh@1575 -- # device=0xa80a 00:05:08.631 12:32:41 -- common/autotest_common.sh@1576 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:08.631 12:32:41 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:08.631 12:32:41 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:08.631 12:32:41 -- common/autotest_common.sh@1588 -- # return 0 00:05:08.631 12:32:41 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:08.631 12:32:41 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:08.631 12:32:41 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:08.631 12:32:41 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:08.631 12:32:41 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:08.631 12:32:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:08.631 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:05:08.631 12:32:41 -- spdk/autotest.sh@162 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:08.631 12:32:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.631 12:32:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.631 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:05:08.631 ************************************ 00:05:08.631 START TEST env 00:05:08.631 ************************************ 00:05:08.631 12:32:41 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:08.893 * Looking for test storage... 
00:05:08.893 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:08.893 12:32:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:08.893 12:32:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:08.893 12:32:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:08.893 12:32:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:08.893 12:32:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:08.893 12:32:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:08.893 12:32:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:08.893 12:32:41 -- scripts/common.sh@335 -- # IFS=.-: 00:05:08.893 12:32:41 -- scripts/common.sh@335 -- # read -ra ver1 00:05:08.893 12:32:41 -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.893 12:32:41 -- scripts/common.sh@336 -- # read -ra ver2 00:05:08.893 12:32:41 -- scripts/common.sh@337 -- # local 'op=<' 00:05:08.893 12:32:41 -- scripts/common.sh@339 -- # ver1_l=2 00:05:08.893 12:32:41 -- scripts/common.sh@340 -- # ver2_l=1 00:05:08.893 12:32:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:08.893 12:32:41 -- scripts/common.sh@343 -- # case "$op" in 00:05:08.893 12:32:41 -- scripts/common.sh@344 -- # : 1 00:05:08.893 12:32:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:08.893 12:32:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.893 12:32:41 -- scripts/common.sh@364 -- # decimal 1 00:05:08.893 12:32:41 -- scripts/common.sh@352 -- # local d=1 00:05:08.893 12:32:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.893 12:32:41 -- scripts/common.sh@354 -- # echo 1 00:05:08.893 12:32:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:08.893 12:32:41 -- scripts/common.sh@365 -- # decimal 2 00:05:08.893 12:32:41 -- scripts/common.sh@352 -- # local d=2 00:05:08.894 12:32:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.894 12:32:41 -- scripts/common.sh@354 -- # echo 2 00:05:08.894 12:32:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:08.894 12:32:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:08.894 12:32:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:08.894 12:32:41 -- scripts/common.sh@367 -- # return 0 00:05:08.894 12:32:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.894 12:32:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:08.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.894 --rc genhtml_branch_coverage=1 00:05:08.894 --rc genhtml_function_coverage=1 00:05:08.894 --rc genhtml_legend=1 00:05:08.894 --rc geninfo_all_blocks=1 00:05:08.894 --rc geninfo_unexecuted_blocks=1 00:05:08.894 00:05:08.894 ' 00:05:08.894 12:32:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:08.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.894 --rc genhtml_branch_coverage=1 00:05:08.894 --rc genhtml_function_coverage=1 00:05:08.894 --rc genhtml_legend=1 00:05:08.894 --rc geninfo_all_blocks=1 00:05:08.894 --rc geninfo_unexecuted_blocks=1 00:05:08.894 00:05:08.894 ' 00:05:08.894 12:32:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:08.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.894 --rc genhtml_branch_coverage=1 00:05:08.894 --rc genhtml_function_coverage=1 00:05:08.894 --rc genhtml_legend=1 00:05:08.894 --rc geninfo_all_blocks=1 00:05:08.894 --rc geninfo_unexecuted_blocks=1 00:05:08.894 00:05:08.894 ' 
00:05:08.894 12:32:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:08.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.894 --rc genhtml_branch_coverage=1 00:05:08.894 --rc genhtml_function_coverage=1 00:05:08.894 --rc genhtml_legend=1 00:05:08.894 --rc geninfo_all_blocks=1 00:05:08.894 --rc geninfo_unexecuted_blocks=1 00:05:08.894 00:05:08.894 ' 00:05:08.894 12:32:41 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:08.894 12:32:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.894 12:32:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.894 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:05:08.894 ************************************ 00:05:08.894 START TEST env_memory 00:05:08.894 ************************************ 00:05:08.894 12:32:41 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:08.894 00:05:08.894 00:05:08.894 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.894 http://cunit.sourceforge.net/ 00:05:08.894 00:05:08.894 00:05:08.894 Suite: memory 00:05:08.894 Test: alloc and free memory map ...[2024-11-20 12:32:41.945141] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:08.894 passed 00:05:08.894 Test: mem map translation ...[2024-11-20 12:32:41.962985] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:08.894 [2024-11-20 12:32:41.963013] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:08.894 [2024-11-20 12:32:41.963048] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:08.894 [2024-11-20 12:32:41.963063] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:08.894 passed 00:05:09.156 Test: mem map registration ...[2024-11-20 12:32:42.001065] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:09.156 [2024-11-20 12:32:42.001088] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:09.156 passed 00:05:09.156 Test: mem map adjacent registrations ...passed 00:05:09.156 00:05:09.156 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.156 suites 1 1 n/a 0 0 00:05:09.156 tests 4 4 4 0 0 00:05:09.156 asserts 152 152 152 0 n/a 00:05:09.156 00:05:09.156 Elapsed time = 0.126 seconds 00:05:09.156 00:05:09.156 real 0m0.137s 00:05:09.156 user 0m0.125s 00:05:09.156 sys 0m0.010s 00:05:09.156 12:32:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.156 12:32:42 -- common/autotest_common.sh@10 -- # set +x 00:05:09.156 ************************************ 00:05:09.156 END TEST env_memory 00:05:09.156 ************************************ 00:05:09.156 12:32:42 -- env/env.sh@11 -- # run_test env_vtophys 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:09.156 12:32:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.156 12:32:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.156 12:32:42 -- common/autotest_common.sh@10 -- # set +x 00:05:09.156 ************************************ 00:05:09.156 START TEST env_vtophys 00:05:09.156 ************************************ 00:05:09.156 12:32:42 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:09.156 EAL: lib.eal log level changed from notice to debug 00:05:09.156 EAL: Detected lcore 0 as core 0 on socket 0 00:05:09.156 EAL: Detected lcore 1 as core 1 on socket 0 00:05:09.156 EAL: Detected lcore 2 as core 2 on socket 0 00:05:09.156 EAL: Detected lcore 3 as core 3 on socket 0 00:05:09.156 EAL: Detected lcore 4 as core 4 on socket 0 00:05:09.156 EAL: Detected lcore 5 as core 5 on socket 0 00:05:09.156 EAL: Detected lcore 6 as core 6 on socket 0 00:05:09.157 EAL: Detected lcore 7 as core 7 on socket 0 00:05:09.157 EAL: Detected lcore 8 as core 8 on socket 0 00:05:09.157 EAL: Detected lcore 9 as core 9 on socket 0 00:05:09.157 EAL: Detected lcore 10 as core 10 on socket 0 00:05:09.157 EAL: Detected lcore 11 as core 11 on socket 0 00:05:09.157 EAL: Detected lcore 12 as core 12 on socket 0 00:05:09.157 EAL: Detected lcore 13 as core 13 on socket 0 00:05:09.157 EAL: Detected lcore 14 as core 14 on socket 0 00:05:09.157 EAL: Detected lcore 15 as core 15 on socket 0 00:05:09.157 EAL: Detected lcore 16 as core 16 on socket 0 00:05:09.157 EAL: Detected lcore 17 as core 17 on socket 0 00:05:09.157 EAL: Detected lcore 18 as core 18 on socket 0 00:05:09.157 EAL: Detected lcore 19 as core 19 on socket 0 00:05:09.157 EAL: Detected lcore 20 as core 20 on socket 0 00:05:09.157 EAL: Detected lcore 21 as core 21 on socket 0 00:05:09.157 EAL: Detected lcore 22 as core 22 on socket 0 00:05:09.157 EAL: Detected lcore 23 as core 23 on socket 0 00:05:09.157 EAL: Detected lcore 24 as core 24 on socket 0 00:05:09.157 EAL: Detected lcore 25 as core 25 on socket 0 00:05:09.157 EAL: Detected lcore 26 as core 26 on socket 0 00:05:09.157 EAL: Detected lcore 27 as core 27 on socket 0 00:05:09.157 EAL: Detected lcore 28 as core 28 on socket 0 00:05:09.157 EAL: Detected lcore 29 as core 29 on socket 0 00:05:09.157 EAL: Detected lcore 30 as core 30 on socket 0 00:05:09.157 EAL: Detected lcore 31 as core 31 on socket 0 00:05:09.157 EAL: Detected lcore 32 as core 32 on socket 0 00:05:09.157 EAL: Detected lcore 33 as core 33 on socket 0 00:05:09.157 EAL: Detected lcore 34 as core 34 on socket 0 00:05:09.157 EAL: Detected lcore 35 as core 35 on socket 0 00:05:09.157 EAL: Detected lcore 36 as core 0 on socket 1 00:05:09.157 EAL: Detected lcore 37 as core 1 on socket 1 00:05:09.157 EAL: Detected lcore 38 as core 2 on socket 1 00:05:09.157 EAL: Detected lcore 39 as core 3 on socket 1 00:05:09.157 EAL: Detected lcore 40 as core 4 on socket 1 00:05:09.157 EAL: Detected lcore 41 as core 5 on socket 1 00:05:09.157 EAL: Detected lcore 42 as core 6 on socket 1 00:05:09.157 EAL: Detected lcore 43 as core 7 on socket 1 00:05:09.157 EAL: Detected lcore 44 as core 8 on socket 1 00:05:09.157 EAL: Detected lcore 45 as core 9 on socket 1 00:05:09.157 EAL: Detected lcore 46 as core 10 on socket 1 00:05:09.157 EAL: Detected lcore 47 as core 11 on socket 1 00:05:09.157 EAL: Detected lcore 48 as core 12 on socket 1 00:05:09.157 EAL: Detected lcore 49 as core 13 on socket 1 
00:05:09.157 EAL: Detected lcore 50 as core 14 on socket 1 00:05:09.157 EAL: Detected lcore 51 as core 15 on socket 1 00:05:09.157 EAL: Detected lcore 52 as core 16 on socket 1 00:05:09.157 EAL: Detected lcore 53 as core 17 on socket 1 00:05:09.157 EAL: Detected lcore 54 as core 18 on socket 1 00:05:09.157 EAL: Detected lcore 55 as core 19 on socket 1 00:05:09.157 EAL: Detected lcore 56 as core 20 on socket 1 00:05:09.157 EAL: Detected lcore 57 as core 21 on socket 1 00:05:09.157 EAL: Detected lcore 58 as core 22 on socket 1 00:05:09.157 EAL: Detected lcore 59 as core 23 on socket 1 00:05:09.157 EAL: Detected lcore 60 as core 24 on socket 1 00:05:09.157 EAL: Detected lcore 61 as core 25 on socket 1 00:05:09.157 EAL: Detected lcore 62 as core 26 on socket 1 00:05:09.157 EAL: Detected lcore 63 as core 27 on socket 1 00:05:09.157 EAL: Detected lcore 64 as core 28 on socket 1 00:05:09.157 EAL: Detected lcore 65 as core 29 on socket 1 00:05:09.157 EAL: Detected lcore 66 as core 30 on socket 1 00:05:09.157 EAL: Detected lcore 67 as core 31 on socket 1 00:05:09.157 EAL: Detected lcore 68 as core 32 on socket 1 00:05:09.157 EAL: Detected lcore 69 as core 33 on socket 1 00:05:09.157 EAL: Detected lcore 70 as core 34 on socket 1 00:05:09.157 EAL: Detected lcore 71 as core 35 on socket 1 00:05:09.157 EAL: Detected lcore 72 as core 0 on socket 0 00:05:09.157 EAL: Detected lcore 73 as core 1 on socket 0 00:05:09.157 EAL: Detected lcore 74 as core 2 on socket 0 00:05:09.157 EAL: Detected lcore 75 as core 3 on socket 0 00:05:09.157 EAL: Detected lcore 76 as core 4 on socket 0 00:05:09.157 EAL: Detected lcore 77 as core 5 on socket 0 00:05:09.157 EAL: Detected lcore 78 as core 6 on socket 0 00:05:09.157 EAL: Detected lcore 79 as core 7 on socket 0 00:05:09.157 EAL: Detected lcore 80 as core 8 on socket 0 00:05:09.157 EAL: Detected lcore 81 as core 9 on socket 0 00:05:09.157 EAL: Detected lcore 82 as core 10 on socket 0 00:05:09.157 EAL: Detected lcore 83 as core 11 on socket 0 00:05:09.157 EAL: Detected lcore 84 as core 12 on socket 0 00:05:09.157 EAL: Detected lcore 85 as core 13 on socket 0 00:05:09.157 EAL: Detected lcore 86 as core 14 on socket 0 00:05:09.157 EAL: Detected lcore 87 as core 15 on socket 0 00:05:09.157 EAL: Detected lcore 88 as core 16 on socket 0 00:05:09.157 EAL: Detected lcore 89 as core 17 on socket 0 00:05:09.157 EAL: Detected lcore 90 as core 18 on socket 0 00:05:09.157 EAL: Detected lcore 91 as core 19 on socket 0 00:05:09.157 EAL: Detected lcore 92 as core 20 on socket 0 00:05:09.157 EAL: Detected lcore 93 as core 21 on socket 0 00:05:09.157 EAL: Detected lcore 94 as core 22 on socket 0 00:05:09.157 EAL: Detected lcore 95 as core 23 on socket 0 00:05:09.157 EAL: Detected lcore 96 as core 24 on socket 0 00:05:09.157 EAL: Detected lcore 97 as core 25 on socket 0 00:05:09.157 EAL: Detected lcore 98 as core 26 on socket 0 00:05:09.157 EAL: Detected lcore 99 as core 27 on socket 0 00:05:09.157 EAL: Detected lcore 100 as core 28 on socket 0 00:05:09.157 EAL: Detected lcore 101 as core 29 on socket 0 00:05:09.157 EAL: Detected lcore 102 as core 30 on socket 0 00:05:09.157 EAL: Detected lcore 103 as core 31 on socket 0 00:05:09.157 EAL: Detected lcore 104 as core 32 on socket 0 00:05:09.157 EAL: Detected lcore 105 as core 33 on socket 0 00:05:09.157 EAL: Detected lcore 106 as core 34 on socket 0 00:05:09.157 EAL: Detected lcore 107 as core 35 on socket 0 00:05:09.157 EAL: Detected lcore 108 as core 0 on socket 1 00:05:09.157 EAL: Detected lcore 109 as core 1 on socket 1 00:05:09.157 
EAL: Detected lcore 110 as core 2 on socket 1 00:05:09.157 EAL: Detected lcore 111 as core 3 on socket 1 00:05:09.157 EAL: Detected lcore 112 as core 4 on socket 1 00:05:09.157 EAL: Detected lcore 113 as core 5 on socket 1 00:05:09.157 EAL: Detected lcore 114 as core 6 on socket 1 00:05:09.157 EAL: Detected lcore 115 as core 7 on socket 1 00:05:09.157 EAL: Detected lcore 116 as core 8 on socket 1 00:05:09.157 EAL: Detected lcore 117 as core 9 on socket 1 00:05:09.157 EAL: Detected lcore 118 as core 10 on socket 1 00:05:09.157 EAL: Detected lcore 119 as core 11 on socket 1 00:05:09.157 EAL: Detected lcore 120 as core 12 on socket 1 00:05:09.158 EAL: Detected lcore 121 as core 13 on socket 1 00:05:09.158 EAL: Detected lcore 122 as core 14 on socket 1 00:05:09.158 EAL: Detected lcore 123 as core 15 on socket 1 00:05:09.158 EAL: Detected lcore 124 as core 16 on socket 1 00:05:09.158 EAL: Detected lcore 125 as core 17 on socket 1 00:05:09.158 EAL: Detected lcore 126 as core 18 on socket 1 00:05:09.158 EAL: Detected lcore 127 as core 19 on socket 1 00:05:09.158 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:09.158 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:09.158 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:09.158 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:09.158 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:09.158 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:09.158 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:09.158 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:09.158 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:09.158 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:09.158 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:09.158 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:09.158 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:09.158 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:09.158 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:09.158 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:09.158 EAL: Maximum logical cores by configuration: 128 00:05:09.158 EAL: Detected CPU lcores: 128 00:05:09.158 EAL: Detected NUMA nodes: 2 00:05:09.158 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:09.158 EAL: Detected shared linkage of DPDK 00:05:09.158 EAL: No shared files mode enabled, IPC will be disabled 00:05:09.158 EAL: Bus pci wants IOVA as 'DC' 00:05:09.158 EAL: Buses did not request a specific IOVA mode. 00:05:09.158 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:09.158 EAL: Selected IOVA mode 'VA' 00:05:09.158 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.158 EAL: Probing VFIO support... 00:05:09.158 EAL: IOMMU type 1 (Type 1) is supported 00:05:09.158 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:09.158 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:09.158 EAL: VFIO support initialized 00:05:09.158 EAL: Ask a virtual area of 0x2e000 bytes 00:05:09.158 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:09.158 EAL: Setting up physically contiguous memory... 
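In the vtophys run above, EAL enumerates 128 usable lcores across 2 NUMA sockets (skipping lcores 128-143), selects IOVA-as-VA with VFIO/IOMMU type 1, and notes that node 1 reports no free 2048 kB hugepages before laying out its memseg lists. The same host facts can be confirmed outside the test with a few sysfs reads; the snippet below is a generic sketch, not part of the harness.

  # sketch: check core/NUMA layout, IOMMU availability and per-node 2MB hugepage pools
  lscpu | grep -E '^(CPU\(s\)|Socket\(s\)|NUMA node\(s\)):'
  ls /sys/kernel/iommu_groups | wc -l               # non-zero when VFIO type 1 can be used
  for n in /sys/devices/system/node/node*; do
      echo "$n: $(cat "$n"/hugepages/hugepages-2048kB/free_hugepages) free 2MB hugepages"
  done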
00:05:09.158 EAL: Setting maximum number of open files to 524288 00:05:09.158 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:09.158 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:09.158 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:09.158 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.158 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:09.158 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.158 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.158 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:09.158 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:09.158 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.158 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:09.158 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.158 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.158 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:09.158 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:09.158 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.158 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:09.158 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.158 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.158 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:09.158 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:09.158 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.158 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:09.158 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.158 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.158 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:09.158 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:09.158 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:09.158 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.158 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:09.158 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:09.158 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.158 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:09.158 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:09.158 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.158 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:09.158 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:09.158 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.158 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:09.158 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:09.158 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.158 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:09.158 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:09.158 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.158 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:09.158 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:09.158 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.158 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:09.158 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:09.158 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.158 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:09.158 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:09.158 EAL: Hugepages will be freed exactly as allocated. 00:05:09.158 EAL: No shared files mode enabled, IPC is disabled 00:05:09.158 EAL: No shared files mode enabled, IPC is disabled 00:05:09.158 EAL: TSC frequency is ~2400000 KHz 00:05:09.158 EAL: Main lcore 0 is ready (tid=7fa7444c8a00;cpuset=[0]) 00:05:09.158 EAL: Trying to obtain current memory policy. 00:05:09.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.158 EAL: Restoring previous memory policy: 0 00:05:09.158 EAL: request: mp_malloc_sync 00:05:09.159 EAL: No shared files mode enabled, IPC is disabled 00:05:09.159 EAL: Heap on socket 0 was expanded by 2MB 00:05:09.159 EAL: No shared files mode enabled, IPC is disabled 00:05:09.159 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:09.159 EAL: Mem event callback 'spdk:(nil)' registered 00:05:09.159 00:05:09.159 00:05:09.159 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.159 http://cunit.sourceforge.net/ 00:05:09.159 00:05:09.159 00:05:09.159 Suite: components_suite 00:05:09.159 Test: vtophys_malloc_test ...passed 00:05:09.159 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:09.159 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.159 EAL: Restoring previous memory policy: 4 00:05:09.159 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.159 EAL: request: mp_malloc_sync 00:05:09.159 EAL: No shared files mode enabled, IPC is disabled 00:05:09.159 EAL: Heap on socket 0 was expanded by 4MB 00:05:09.159 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.159 EAL: request: mp_malloc_sync 00:05:09.159 EAL: No shared files mode enabled, IPC is disabled 00:05:09.159 EAL: Heap on socket 0 was shrunk by 4MB 00:05:09.159 EAL: Trying to obtain current memory policy. 00:05:09.159 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.159 EAL: Restoring previous memory policy: 4 00:05:09.159 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.159 EAL: request: mp_malloc_sync 00:05:09.159 EAL: No shared files mode enabled, IPC is disabled 00:05:09.159 EAL: Heap on socket 0 was expanded by 6MB 00:05:09.159 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.159 EAL: request: mp_malloc_sync 00:05:09.159 EAL: No shared files mode enabled, IPC is disabled 00:05:09.159 EAL: Heap on socket 0 was shrunk by 6MB 00:05:09.159 EAL: Trying to obtain current memory policy. 00:05:09.159 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.159 EAL: Restoring previous memory policy: 4 00:05:09.159 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.159 EAL: request: mp_malloc_sync 00:05:09.159 EAL: No shared files mode enabled, IPC is disabled 00:05:09.159 EAL: Heap on socket 0 was expanded by 10MB 00:05:09.159 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.159 EAL: request: mp_malloc_sync 00:05:09.159 EAL: No shared files mode enabled, IPC is disabled 00:05:09.159 EAL: Heap on socket 0 was shrunk by 10MB 00:05:09.159 EAL: Trying to obtain current memory policy. 
00:05:09.159 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.159 EAL: Restoring previous memory policy: 4 00:05:09.159 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.159 EAL: request: mp_malloc_sync 00:05:09.159 EAL: No shared files mode enabled, IPC is disabled 00:05:09.159 EAL: Heap on socket 0 was expanded by 18MB 00:05:09.159 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.159 EAL: request: mp_malloc_sync 00:05:09.159 EAL: No shared files mode enabled, IPC is disabled 00:05:09.159 EAL: Heap on socket 0 was shrunk by 18MB 00:05:09.159 EAL: Trying to obtain current memory policy. 00:05:09.159 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.159 EAL: Restoring previous memory policy: 4 00:05:09.159 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.159 EAL: request: mp_malloc_sync 00:05:09.159 EAL: No shared files mode enabled, IPC is disabled 00:05:09.159 EAL: Heap on socket 0 was expanded by 34MB 00:05:09.159 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.159 EAL: request: mp_malloc_sync 00:05:09.159 EAL: No shared files mode enabled, IPC is disabled 00:05:09.159 EAL: Heap on socket 0 was shrunk by 34MB 00:05:09.159 EAL: Trying to obtain current memory policy. 00:05:09.159 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.159 EAL: Restoring previous memory policy: 4 00:05:09.159 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.159 EAL: request: mp_malloc_sync 00:05:09.159 EAL: No shared files mode enabled, IPC is disabled 00:05:09.159 EAL: Heap on socket 0 was expanded by 66MB 00:05:09.159 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.159 EAL: request: mp_malloc_sync 00:05:09.159 EAL: No shared files mode enabled, IPC is disabled 00:05:09.159 EAL: Heap on socket 0 was shrunk by 66MB 00:05:09.159 EAL: Trying to obtain current memory policy. 00:05:09.159 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.420 EAL: Restoring previous memory policy: 4 00:05:09.420 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.420 EAL: request: mp_malloc_sync 00:05:09.420 EAL: No shared files mode enabled, IPC is disabled 00:05:09.420 EAL: Heap on socket 0 was expanded by 130MB 00:05:09.420 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.420 EAL: request: mp_malloc_sync 00:05:09.420 EAL: No shared files mode enabled, IPC is disabled 00:05:09.420 EAL: Heap on socket 0 was shrunk by 130MB 00:05:09.420 EAL: Trying to obtain current memory policy. 00:05:09.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.420 EAL: Restoring previous memory policy: 4 00:05:09.420 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.420 EAL: request: mp_malloc_sync 00:05:09.420 EAL: No shared files mode enabled, IPC is disabled 00:05:09.420 EAL: Heap on socket 0 was expanded by 258MB 00:05:09.420 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.420 EAL: request: mp_malloc_sync 00:05:09.420 EAL: No shared files mode enabled, IPC is disabled 00:05:09.420 EAL: Heap on socket 0 was shrunk by 258MB 00:05:09.420 EAL: Trying to obtain current memory policy. 
00:05:09.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.420 EAL: Restoring previous memory policy: 4 00:05:09.420 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.420 EAL: request: mp_malloc_sync 00:05:09.420 EAL: No shared files mode enabled, IPC is disabled 00:05:09.420 EAL: Heap on socket 0 was expanded by 514MB 00:05:09.420 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.681 EAL: request: mp_malloc_sync 00:05:09.681 EAL: No shared files mode enabled, IPC is disabled 00:05:09.681 EAL: Heap on socket 0 was shrunk by 514MB 00:05:09.681 EAL: Trying to obtain current memory policy. 00:05:09.681 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.681 EAL: Restoring previous memory policy: 4 00:05:09.681 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.681 EAL: request: mp_malloc_sync 00:05:09.681 EAL: No shared files mode enabled, IPC is disabled 00:05:09.681 EAL: Heap on socket 0 was expanded by 1026MB 00:05:09.944 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.944 EAL: request: mp_malloc_sync 00:05:09.944 EAL: No shared files mode enabled, IPC is disabled 00:05:09.944 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:09.944 passed 00:05:09.944 00:05:09.944 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.944 suites 1 1 n/a 0 0 00:05:09.944 tests 2 2 2 0 0 00:05:09.944 asserts 497 497 497 0 n/a 00:05:09.944 00:05:09.944 Elapsed time = 0.704 seconds 00:05:09.944 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.944 EAL: request: mp_malloc_sync 00:05:09.944 EAL: No shared files mode enabled, IPC is disabled 00:05:09.944 EAL: Heap on socket 0 was shrunk by 2MB 00:05:09.944 EAL: No shared files mode enabled, IPC is disabled 00:05:09.944 EAL: No shared files mode enabled, IPC is disabled 00:05:09.944 EAL: No shared files mode enabled, IPC is disabled 00:05:09.944 00:05:09.944 real 0m0.838s 00:05:09.944 user 0m0.436s 00:05:09.944 sys 0m0.378s 00:05:09.944 12:32:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.944 12:32:42 -- common/autotest_common.sh@10 -- # set +x 00:05:09.944 ************************************ 00:05:09.944 END TEST env_vtophys 00:05:09.944 ************************************ 00:05:09.944 12:32:42 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:09.944 12:32:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.944 12:32:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.944 12:32:42 -- common/autotest_common.sh@10 -- # set +x 00:05:09.944 ************************************ 00:05:09.944 START TEST env_pci 00:05:09.944 ************************************ 00:05:09.944 12:32:42 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:09.944 00:05:09.944 00:05:09.944 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.944 http://cunit.sourceforge.net/ 00:05:09.944 00:05:09.944 00:05:09.944 Suite: pci 00:05:09.944 Test: pci_hook ...[2024-11-20 12:32:42.998420] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 306725 has claimed it 00:05:09.944 EAL: Cannot find device (10000:00:01.0) 00:05:09.944 EAL: Failed to attach device on primary process 00:05:09.944 passed 00:05:09.944 00:05:09.944 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.944 suites 1 1 n/a 0 0 00:05:09.944 tests 1 1 1 0 0 00:05:09.944 asserts 
25 25 25 0 n/a 00:05:09.944 00:05:09.944 Elapsed time = 0.031 seconds 00:05:09.944 00:05:09.944 real 0m0.053s 00:05:09.944 user 0m0.018s 00:05:09.944 sys 0m0.035s 00:05:09.944 12:32:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.944 12:32:43 -- common/autotest_common.sh@10 -- # set +x 00:05:09.944 ************************************ 00:05:09.944 END TEST env_pci 00:05:09.944 ************************************ 00:05:10.207 12:32:43 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:10.207 12:32:43 -- env/env.sh@15 -- # uname 00:05:10.207 12:32:43 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:10.207 12:32:43 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:10.207 12:32:43 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.207 12:32:43 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:10.207 12:32:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.207 12:32:43 -- common/autotest_common.sh@10 -- # set +x 00:05:10.207 ************************************ 00:05:10.207 START TEST env_dpdk_post_init 00:05:10.207 ************************************ 00:05:10.207 12:32:43 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.207 EAL: Detected CPU lcores: 128 00:05:10.207 EAL: Detected NUMA nodes: 2 00:05:10.207 EAL: Detected shared linkage of DPDK 00:05:10.207 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.207 EAL: Selected IOVA mode 'VA' 00:05:10.207 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.207 EAL: VFIO support initialized 00:05:10.207 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:10.207 EAL: Using IOMMU type 1 (Type 1) 00:05:10.471 EAL: Ignore mapping IO port bar(1) 00:05:10.471 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:10.471 EAL: Ignore mapping IO port bar(1) 00:05:10.733 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:10.733 EAL: Ignore mapping IO port bar(1) 00:05:10.995 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:10.995 EAL: Ignore mapping IO port bar(1) 00:05:11.257 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:11.257 EAL: Ignore mapping IO port bar(1) 00:05:11.257 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:11.519 EAL: Ignore mapping IO port bar(1) 00:05:11.519 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:11.781 EAL: Ignore mapping IO port bar(1) 00:05:11.781 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:12.042 EAL: Ignore mapping IO port bar(1) 00:05:12.042 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:12.303 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:12.303 EAL: Ignore mapping IO port bar(1) 00:05:12.564 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:12.564 EAL: Ignore mapping IO port bar(1) 00:05:12.851 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:12.851 EAL: Ignore mapping IO port bar(1) 00:05:12.851 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:13.115 EAL: Ignore mapping 
IO port bar(1) 00:05:13.115 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:13.439 EAL: Ignore mapping IO port bar(1) 00:05:13.439 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:13.439 EAL: Ignore mapping IO port bar(1) 00:05:13.720 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:13.720 EAL: Ignore mapping IO port bar(1) 00:05:13.720 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:14.005 EAL: Ignore mapping IO port bar(1) 00:05:14.005 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:14.005 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:14.005 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:14.300 Starting DPDK initialization... 00:05:14.300 Starting SPDK post initialization... 00:05:14.300 SPDK NVMe probe 00:05:14.300 Attaching to 0000:65:00.0 00:05:14.300 Attached to 0000:65:00.0 00:05:14.301 Cleaning up... 00:05:16.254 00:05:16.254 real 0m5.742s 00:05:16.254 user 0m0.195s 00:05:16.254 sys 0m0.098s 00:05:16.254 12:32:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.254 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:05:16.254 ************************************ 00:05:16.254 END TEST env_dpdk_post_init 00:05:16.254 ************************************ 00:05:16.255 12:32:48 -- env/env.sh@26 -- # uname 00:05:16.255 12:32:48 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:16.255 12:32:48 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:16.255 12:32:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.255 12:32:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.255 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:05:16.255 ************************************ 00:05:16.255 START TEST env_mem_callbacks 00:05:16.255 ************************************ 00:05:16.255 12:32:48 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:16.255 EAL: Detected CPU lcores: 128 00:05:16.255 EAL: Detected NUMA nodes: 2 00:05:16.255 EAL: Detected shared linkage of DPDK 00:05:16.255 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:16.255 EAL: Selected IOVA mode 'VA' 00:05:16.255 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.255 EAL: VFIO support initialized 00:05:16.255 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:16.255 00:05:16.255 00:05:16.255 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.255 http://cunit.sourceforge.net/ 00:05:16.255 00:05:16.255 00:05:16.255 Suite: memory 00:05:16.255 Test: test ... 
00:05:16.255 register 0x200000200000 2097152 00:05:16.255 malloc 3145728 00:05:16.255 register 0x200000400000 4194304 00:05:16.255 buf 0x200000500000 len 3145728 PASSED 00:05:16.255 malloc 64 00:05:16.255 buf 0x2000004fff40 len 64 PASSED 00:05:16.255 malloc 4194304 00:05:16.255 register 0x200000800000 6291456 00:05:16.255 buf 0x200000a00000 len 4194304 PASSED 00:05:16.255 free 0x200000500000 3145728 00:05:16.255 free 0x2000004fff40 64 00:05:16.255 unregister 0x200000400000 4194304 PASSED 00:05:16.255 free 0x200000a00000 4194304 00:05:16.255 unregister 0x200000800000 6291456 PASSED 00:05:16.255 malloc 8388608 00:05:16.255 register 0x200000400000 10485760 00:05:16.255 buf 0x200000600000 len 8388608 PASSED 00:05:16.255 free 0x200000600000 8388608 00:05:16.255 unregister 0x200000400000 10485760 PASSED 00:05:16.255 passed 00:05:16.255 00:05:16.255 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.255 suites 1 1 n/a 0 0 00:05:16.255 tests 1 1 1 0 0 00:05:16.255 asserts 15 15 15 0 n/a 00:05:16.255 00:05:16.255 Elapsed time = 0.010 seconds 00:05:16.255 00:05:16.255 real 0m0.069s 00:05:16.255 user 0m0.025s 00:05:16.255 sys 0m0.043s 00:05:16.255 12:32:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.255 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:05:16.255 ************************************ 00:05:16.255 END TEST env_mem_callbacks 00:05:16.255 ************************************ 00:05:16.255 00:05:16.255 real 0m7.278s 00:05:16.255 user 0m0.978s 00:05:16.255 sys 0m0.874s 00:05:16.255 12:32:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.255 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:05:16.255 ************************************ 00:05:16.255 END TEST env 00:05:16.255 ************************************ 00:05:16.255 12:32:49 -- spdk/autotest.sh@163 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:16.255 12:32:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.255 12:32:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.255 12:32:49 -- common/autotest_common.sh@10 -- # set +x 00:05:16.255 ************************************ 00:05:16.255 START TEST rpc 00:05:16.255 ************************************ 00:05:16.255 12:32:49 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:16.255 * Looking for test storage... 
00:05:16.255 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:16.255 12:32:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:16.255 12:32:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:16.255 12:32:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:16.255 12:32:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:16.255 12:32:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:16.255 12:32:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:16.255 12:32:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:16.255 12:32:49 -- scripts/common.sh@335 -- # IFS=.-: 00:05:16.255 12:32:49 -- scripts/common.sh@335 -- # read -ra ver1 00:05:16.255 12:32:49 -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.255 12:32:49 -- scripts/common.sh@336 -- # read -ra ver2 00:05:16.255 12:32:49 -- scripts/common.sh@337 -- # local 'op=<' 00:05:16.255 12:32:49 -- scripts/common.sh@339 -- # ver1_l=2 00:05:16.255 12:32:49 -- scripts/common.sh@340 -- # ver2_l=1 00:05:16.255 12:32:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:16.255 12:32:49 -- scripts/common.sh@343 -- # case "$op" in 00:05:16.255 12:32:49 -- scripts/common.sh@344 -- # : 1 00:05:16.255 12:32:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:16.255 12:32:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.255 12:32:49 -- scripts/common.sh@364 -- # decimal 1 00:05:16.255 12:32:49 -- scripts/common.sh@352 -- # local d=1 00:05:16.255 12:32:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.255 12:32:49 -- scripts/common.sh@354 -- # echo 1 00:05:16.255 12:32:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:16.255 12:32:49 -- scripts/common.sh@365 -- # decimal 2 00:05:16.255 12:32:49 -- scripts/common.sh@352 -- # local d=2 00:05:16.255 12:32:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.255 12:32:49 -- scripts/common.sh@354 -- # echo 2 00:05:16.255 12:32:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:16.255 12:32:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:16.255 12:32:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:16.255 12:32:49 -- scripts/common.sh@367 -- # return 0 00:05:16.255 12:32:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.255 12:32:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:16.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.255 --rc genhtml_branch_coverage=1 00:05:16.255 --rc genhtml_function_coverage=1 00:05:16.255 --rc genhtml_legend=1 00:05:16.255 --rc geninfo_all_blocks=1 00:05:16.255 --rc geninfo_unexecuted_blocks=1 00:05:16.255 00:05:16.255 ' 00:05:16.255 12:32:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:16.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.255 --rc genhtml_branch_coverage=1 00:05:16.255 --rc genhtml_function_coverage=1 00:05:16.255 --rc genhtml_legend=1 00:05:16.255 --rc geninfo_all_blocks=1 00:05:16.255 --rc geninfo_unexecuted_blocks=1 00:05:16.255 00:05:16.255 ' 00:05:16.255 12:32:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:16.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.255 --rc genhtml_branch_coverage=1 00:05:16.255 --rc genhtml_function_coverage=1 00:05:16.255 --rc genhtml_legend=1 00:05:16.255 --rc geninfo_all_blocks=1 00:05:16.255 --rc geninfo_unexecuted_blocks=1 00:05:16.255 00:05:16.255 ' 
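The preamble above decides which LCOV options to export by comparing the installed lcov version against 2 using the lt/cmp_versions helpers from scripts/common.sh. The snippet below is a simplified, self-contained stand-in for that dotted-version comparison (not the actual helper), shown only to make the idiom explicit.

    # Sketch of a "version less-than" check over dot/dash-separated fields;
    # scripts/common.sh implements the same idea with cmp_versions/decimal.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        echo "lcov is older than 2.x - use the branch/function coverage flags seen above"
    fi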
00:05:16.255 12:32:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:16.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.255 --rc genhtml_branch_coverage=1 00:05:16.255 --rc genhtml_function_coverage=1 00:05:16.255 --rc genhtml_legend=1 00:05:16.256 --rc geninfo_all_blocks=1 00:05:16.256 --rc geninfo_unexecuted_blocks=1 00:05:16.256 00:05:16.256 ' 00:05:16.256 12:32:49 -- rpc/rpc.sh@65 -- # spdk_pid=308187 00:05:16.256 12:32:49 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.256 12:32:49 -- rpc/rpc.sh@67 -- # waitforlisten 308187 00:05:16.256 12:32:49 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:16.256 12:32:49 -- common/autotest_common.sh@829 -- # '[' -z 308187 ']' 00:05:16.256 12:32:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.256 12:32:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.256 12:32:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.256 12:32:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.256 12:32:49 -- common/autotest_common.sh@10 -- # set +x 00:05:16.256 [2024-11-20 12:32:49.298166] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:16.256 [2024-11-20 12:32:49.298242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308187 ] 00:05:16.256 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.543 [2024-11-20 12:32:49.379881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.544 [2024-11-20 12:32:49.471208] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:16.544 [2024-11-20 12:32:49.471373] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:16.544 [2024-11-20 12:32:49.471384] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 308187' to capture a snapshot of events at runtime. 00:05:16.544 [2024-11-20 12:32:49.471391] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid308187 for offline analysis/debug. 
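The trace above shows spdk_tgt being launched with the bdev tracepoint group enabled and rpc.sh waiting on the default RPC socket before the individual rpc_* tests run. A rough equivalent of that start-up-and-probe sequence, plus the bdev calls that rpc_integrity exercises next, is sketched below; paths and RPC method names are the ones visible in this log, and rpc_get_methods is used here only as a cheap liveness probe.

    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"       # default socket, as in the log

    sudo "$SPDK_DIR/build/bin/spdk_tgt" -e bdev &               # enable the bdev tracepoint group
    tgt_pid=$!

    # Wait until the target answers on its RPC socket.
    until sudo $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    # The same bdev round-trip rpc_integrity performs:
    sudo $RPC bdev_malloc_create 8 512                          # returns the new bdev name (Malloc0 in this run)
    sudo $RPC bdev_passthru_create -b Malloc0 -p Passthru0
    sudo $RPC bdev_get_bdevs | jq length                        # 2 while both bdevs exist
    sudo $RPC bdev_passthru_delete Passthru0
    sudo $RPC bdev_malloc_delete Malloc0

    sudo kill "$tgt_pid"; wait "$tgt_pid" 2>/dev/null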
00:05:16.544 [2024-11-20 12:32:49.471431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.143 12:32:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.143 12:32:50 -- common/autotest_common.sh@862 -- # return 0 00:05:17.143 12:32:50 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:17.143 12:32:50 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:17.143 12:32:50 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:17.143 12:32:50 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:17.143 12:32:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.143 12:32:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.143 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.143 ************************************ 00:05:17.143 START TEST rpc_integrity 00:05:17.143 ************************************ 00:05:17.143 12:32:50 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:17.143 12:32:50 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:17.143 12:32:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.143 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.143 12:32:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.143 12:32:50 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:17.143 12:32:50 -- rpc/rpc.sh@13 -- # jq length 00:05:17.143 12:32:50 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:17.143 12:32:50 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:17.143 12:32:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.143 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.143 12:32:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.143 12:32:50 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:17.143 12:32:50 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:17.143 12:32:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.144 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.144 12:32:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.144 12:32:50 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:17.144 { 00:05:17.144 "name": "Malloc0", 00:05:17.144 "aliases": [ 00:05:17.144 "ea84fbb9-53bf-490d-86b2-7fc9f2b3ee57" 00:05:17.144 ], 00:05:17.144 "product_name": "Malloc disk", 00:05:17.144 "block_size": 512, 00:05:17.144 "num_blocks": 16384, 00:05:17.144 "uuid": "ea84fbb9-53bf-490d-86b2-7fc9f2b3ee57", 00:05:17.144 "assigned_rate_limits": { 00:05:17.144 "rw_ios_per_sec": 0, 00:05:17.144 "rw_mbytes_per_sec": 0, 00:05:17.144 "r_mbytes_per_sec": 0, 00:05:17.144 "w_mbytes_per_sec": 0 00:05:17.144 }, 00:05:17.144 "claimed": false, 00:05:17.144 "zoned": false, 00:05:17.144 "supported_io_types": { 00:05:17.144 "read": true, 00:05:17.144 "write": true, 00:05:17.144 "unmap": true, 00:05:17.144 "write_zeroes": true, 00:05:17.144 "flush": true, 00:05:17.144 "reset": true, 00:05:17.144 "compare": false, 00:05:17.144 "compare_and_write": false, 00:05:17.144 "abort": true, 00:05:17.144 "nvme_admin": 
false, 00:05:17.144 "nvme_io": false 00:05:17.144 }, 00:05:17.144 "memory_domains": [ 00:05:17.144 { 00:05:17.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.144 "dma_device_type": 2 00:05:17.144 } 00:05:17.144 ], 00:05:17.144 "driver_specific": {} 00:05:17.144 } 00:05:17.144 ]' 00:05:17.144 12:32:50 -- rpc/rpc.sh@17 -- # jq length 00:05:17.144 12:32:50 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:17.144 12:32:50 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:17.144 12:32:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.144 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.413 [2024-11-20 12:32:50.250128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:17.413 [2024-11-20 12:32:50.250181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.413 [2024-11-20 12:32:50.250197] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd8b870 00:05:17.413 [2024-11-20 12:32:50.250205] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:17.413 [2024-11-20 12:32:50.251719] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:17.413 [2024-11-20 12:32:50.251755] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:17.413 Passthru0 00:05:17.413 12:32:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.413 12:32:50 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:17.413 12:32:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.413 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.413 12:32:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.413 12:32:50 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:17.413 { 00:05:17.413 "name": "Malloc0", 00:05:17.413 "aliases": [ 00:05:17.413 "ea84fbb9-53bf-490d-86b2-7fc9f2b3ee57" 00:05:17.413 ], 00:05:17.413 "product_name": "Malloc disk", 00:05:17.413 "block_size": 512, 00:05:17.413 "num_blocks": 16384, 00:05:17.413 "uuid": "ea84fbb9-53bf-490d-86b2-7fc9f2b3ee57", 00:05:17.413 "assigned_rate_limits": { 00:05:17.413 "rw_ios_per_sec": 0, 00:05:17.413 "rw_mbytes_per_sec": 0, 00:05:17.413 "r_mbytes_per_sec": 0, 00:05:17.413 "w_mbytes_per_sec": 0 00:05:17.413 }, 00:05:17.413 "claimed": true, 00:05:17.413 "claim_type": "exclusive_write", 00:05:17.413 "zoned": false, 00:05:17.413 "supported_io_types": { 00:05:17.413 "read": true, 00:05:17.413 "write": true, 00:05:17.413 "unmap": true, 00:05:17.413 "write_zeroes": true, 00:05:17.413 "flush": true, 00:05:17.413 "reset": true, 00:05:17.413 "compare": false, 00:05:17.413 "compare_and_write": false, 00:05:17.413 "abort": true, 00:05:17.413 "nvme_admin": false, 00:05:17.413 "nvme_io": false 00:05:17.413 }, 00:05:17.413 "memory_domains": [ 00:05:17.413 { 00:05:17.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.413 "dma_device_type": 2 00:05:17.413 } 00:05:17.413 ], 00:05:17.413 "driver_specific": {} 00:05:17.413 }, 00:05:17.413 { 00:05:17.413 "name": "Passthru0", 00:05:17.413 "aliases": [ 00:05:17.413 "3146d758-a58c-52b4-8be4-723a2fa4d8c0" 00:05:17.413 ], 00:05:17.413 "product_name": "passthru", 00:05:17.413 "block_size": 512, 00:05:17.413 "num_blocks": 16384, 00:05:17.413 "uuid": "3146d758-a58c-52b4-8be4-723a2fa4d8c0", 00:05:17.413 "assigned_rate_limits": { 00:05:17.413 "rw_ios_per_sec": 0, 00:05:17.413 "rw_mbytes_per_sec": 0, 00:05:17.413 "r_mbytes_per_sec": 0, 00:05:17.413 "w_mbytes_per_sec": 0 00:05:17.413 }, 00:05:17.413 "claimed": false, 
00:05:17.413 "zoned": false, 00:05:17.413 "supported_io_types": { 00:05:17.413 "read": true, 00:05:17.413 "write": true, 00:05:17.413 "unmap": true, 00:05:17.413 "write_zeroes": true, 00:05:17.413 "flush": true, 00:05:17.413 "reset": true, 00:05:17.413 "compare": false, 00:05:17.413 "compare_and_write": false, 00:05:17.413 "abort": true, 00:05:17.413 "nvme_admin": false, 00:05:17.413 "nvme_io": false 00:05:17.413 }, 00:05:17.413 "memory_domains": [ 00:05:17.413 { 00:05:17.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.413 "dma_device_type": 2 00:05:17.413 } 00:05:17.413 ], 00:05:17.413 "driver_specific": { 00:05:17.413 "passthru": { 00:05:17.413 "name": "Passthru0", 00:05:17.413 "base_bdev_name": "Malloc0" 00:05:17.413 } 00:05:17.413 } 00:05:17.413 } 00:05:17.413 ]' 00:05:17.413 12:32:50 -- rpc/rpc.sh@21 -- # jq length 00:05:17.413 12:32:50 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.413 12:32:50 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.413 12:32:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.413 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.413 12:32:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.413 12:32:50 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:17.413 12:32:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.413 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.414 12:32:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.414 12:32:50 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.414 12:32:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.414 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.414 12:32:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.414 12:32:50 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.414 12:32:50 -- rpc/rpc.sh@26 -- # jq length 00:05:17.414 12:32:50 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.414 00:05:17.414 real 0m0.289s 00:05:17.414 user 0m0.176s 00:05:17.414 sys 0m0.047s 00:05:17.414 12:32:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.414 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.414 ************************************ 00:05:17.414 END TEST rpc_integrity 00:05:17.414 ************************************ 00:05:17.414 12:32:50 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:17.414 12:32:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.414 12:32:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.414 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.414 ************************************ 00:05:17.414 START TEST rpc_plugins 00:05:17.414 ************************************ 00:05:17.414 12:32:50 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:17.414 12:32:50 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:17.414 12:32:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.414 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.414 12:32:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.414 12:32:50 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:17.414 12:32:50 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:17.414 12:32:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.414 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.414 12:32:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.414 12:32:50 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:17.414 { 00:05:17.414 "name": "Malloc1", 
00:05:17.414 "aliases": [ 00:05:17.414 "3f083073-289b-4de5-bb3e-8baab41328ab" 00:05:17.414 ], 00:05:17.414 "product_name": "Malloc disk", 00:05:17.414 "block_size": 4096, 00:05:17.414 "num_blocks": 256, 00:05:17.414 "uuid": "3f083073-289b-4de5-bb3e-8baab41328ab", 00:05:17.414 "assigned_rate_limits": { 00:05:17.414 "rw_ios_per_sec": 0, 00:05:17.414 "rw_mbytes_per_sec": 0, 00:05:17.414 "r_mbytes_per_sec": 0, 00:05:17.414 "w_mbytes_per_sec": 0 00:05:17.414 }, 00:05:17.414 "claimed": false, 00:05:17.414 "zoned": false, 00:05:17.414 "supported_io_types": { 00:05:17.414 "read": true, 00:05:17.414 "write": true, 00:05:17.414 "unmap": true, 00:05:17.414 "write_zeroes": true, 00:05:17.414 "flush": true, 00:05:17.414 "reset": true, 00:05:17.414 "compare": false, 00:05:17.414 "compare_and_write": false, 00:05:17.414 "abort": true, 00:05:17.414 "nvme_admin": false, 00:05:17.414 "nvme_io": false 00:05:17.414 }, 00:05:17.414 "memory_domains": [ 00:05:17.414 { 00:05:17.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.414 "dma_device_type": 2 00:05:17.414 } 00:05:17.414 ], 00:05:17.414 "driver_specific": {} 00:05:17.414 } 00:05:17.414 ]' 00:05:17.414 12:32:50 -- rpc/rpc.sh@32 -- # jq length 00:05:17.687 12:32:50 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:17.687 12:32:50 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:17.687 12:32:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.687 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.687 12:32:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.687 12:32:50 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:17.687 12:32:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.687 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.688 12:32:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.688 12:32:50 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:17.688 12:32:50 -- rpc/rpc.sh@36 -- # jq length 00:05:17.688 12:32:50 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:17.688 00:05:17.688 real 0m0.141s 00:05:17.688 user 0m0.086s 00:05:17.688 sys 0m0.020s 00:05:17.688 12:32:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.688 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.688 ************************************ 00:05:17.688 END TEST rpc_plugins 00:05:17.688 ************************************ 00:05:17.688 12:32:50 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:17.688 12:32:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.688 12:32:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.688 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.688 ************************************ 00:05:17.688 START TEST rpc_trace_cmd_test 00:05:17.688 ************************************ 00:05:17.688 12:32:50 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:17.688 12:32:50 -- rpc/rpc.sh@40 -- # local info 00:05:17.688 12:32:50 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:17.688 12:32:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.688 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.688 12:32:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.688 12:32:50 -- rpc/rpc.sh@42 -- # info='{ 00:05:17.688 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid308187", 00:05:17.688 "tpoint_group_mask": "0x8", 00:05:17.688 "iscsi_conn": { 00:05:17.688 "mask": "0x2", 00:05:17.688 "tpoint_mask": "0x0" 00:05:17.688 }, 00:05:17.688 "scsi": { 
00:05:17.688 "mask": "0x4", 00:05:17.688 "tpoint_mask": "0x0" 00:05:17.688 }, 00:05:17.688 "bdev": { 00:05:17.688 "mask": "0x8", 00:05:17.688 "tpoint_mask": "0xffffffffffffffff" 00:05:17.688 }, 00:05:17.688 "nvmf_rdma": { 00:05:17.688 "mask": "0x10", 00:05:17.688 "tpoint_mask": "0x0" 00:05:17.688 }, 00:05:17.688 "nvmf_tcp": { 00:05:17.688 "mask": "0x20", 00:05:17.688 "tpoint_mask": "0x0" 00:05:17.688 }, 00:05:17.688 "ftl": { 00:05:17.688 "mask": "0x40", 00:05:17.688 "tpoint_mask": "0x0" 00:05:17.688 }, 00:05:17.688 "blobfs": { 00:05:17.688 "mask": "0x80", 00:05:17.688 "tpoint_mask": "0x0" 00:05:17.688 }, 00:05:17.688 "dsa": { 00:05:17.688 "mask": "0x200", 00:05:17.688 "tpoint_mask": "0x0" 00:05:17.688 }, 00:05:17.688 "thread": { 00:05:17.688 "mask": "0x400", 00:05:17.688 "tpoint_mask": "0x0" 00:05:17.688 }, 00:05:17.688 "nvme_pcie": { 00:05:17.688 "mask": "0x800", 00:05:17.688 "tpoint_mask": "0x0" 00:05:17.688 }, 00:05:17.688 "iaa": { 00:05:17.688 "mask": "0x1000", 00:05:17.688 "tpoint_mask": "0x0" 00:05:17.688 }, 00:05:17.688 "nvme_tcp": { 00:05:17.688 "mask": "0x2000", 00:05:17.688 "tpoint_mask": "0x0" 00:05:17.688 }, 00:05:17.688 "bdev_nvme": { 00:05:17.688 "mask": "0x4000", 00:05:17.688 "tpoint_mask": "0x0" 00:05:17.688 } 00:05:17.688 }' 00:05:17.688 12:32:50 -- rpc/rpc.sh@43 -- # jq length 00:05:17.688 12:32:50 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:17.688 12:32:50 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:17.688 12:32:50 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:17.688 12:32:50 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:17.974 12:32:50 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:17.975 12:32:50 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:17.975 12:32:50 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:17.975 12:32:50 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:17.975 12:32:50 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:17.975 00:05:17.975 real 0m0.250s 00:05:17.975 user 0m0.209s 00:05:17.975 sys 0m0.030s 00:05:17.975 12:32:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.975 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.975 ************************************ 00:05:17.975 END TEST rpc_trace_cmd_test 00:05:17.975 ************************************ 00:05:17.975 12:32:50 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:17.975 12:32:50 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:17.975 12:32:50 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:17.975 12:32:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.975 12:32:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.975 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.975 ************************************ 00:05:17.975 START TEST rpc_daemon_integrity 00:05:17.975 ************************************ 00:05:17.975 12:32:50 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:17.975 12:32:50 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:17.975 12:32:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.975 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.975 12:32:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.975 12:32:50 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:17.975 12:32:50 -- rpc/rpc.sh@13 -- # jq length 00:05:17.975 12:32:50 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:17.975 12:32:50 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:17.975 12:32:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.975 
12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.975 12:32:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.975 12:32:50 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:17.975 12:32:50 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:17.975 12:32:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.975 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.975 12:32:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.975 12:32:51 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:17.975 { 00:05:17.975 "name": "Malloc2", 00:05:17.975 "aliases": [ 00:05:17.975 "ff163d6e-6777-46cc-b09d-d3aa0b5f6d49" 00:05:17.975 ], 00:05:17.975 "product_name": "Malloc disk", 00:05:17.975 "block_size": 512, 00:05:17.975 "num_blocks": 16384, 00:05:17.975 "uuid": "ff163d6e-6777-46cc-b09d-d3aa0b5f6d49", 00:05:17.975 "assigned_rate_limits": { 00:05:17.975 "rw_ios_per_sec": 0, 00:05:17.975 "rw_mbytes_per_sec": 0, 00:05:17.975 "r_mbytes_per_sec": 0, 00:05:17.975 "w_mbytes_per_sec": 0 00:05:17.975 }, 00:05:17.975 "claimed": false, 00:05:17.975 "zoned": false, 00:05:17.975 "supported_io_types": { 00:05:17.975 "read": true, 00:05:17.975 "write": true, 00:05:17.975 "unmap": true, 00:05:17.975 "write_zeroes": true, 00:05:17.975 "flush": true, 00:05:17.975 "reset": true, 00:05:17.975 "compare": false, 00:05:17.975 "compare_and_write": false, 00:05:17.975 "abort": true, 00:05:17.975 "nvme_admin": false, 00:05:17.975 "nvme_io": false 00:05:17.975 }, 00:05:17.975 "memory_domains": [ 00:05:17.975 { 00:05:17.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.975 "dma_device_type": 2 00:05:17.975 } 00:05:17.975 ], 00:05:17.975 "driver_specific": {} 00:05:17.975 } 00:05:17.975 ]' 00:05:17.975 12:32:51 -- rpc/rpc.sh@17 -- # jq length 00:05:17.975 12:32:51 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:17.975 12:32:51 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:17.975 12:32:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.975 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:17.975 [2024-11-20 12:32:51.064328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:17.975 [2024-11-20 12:32:51.064370] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.975 [2024-11-20 12:32:51.064388] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd8c160 00:05:17.975 [2024-11-20 12:32:51.064396] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:17.975 [2024-11-20 12:32:51.065765] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:17.975 [2024-11-20 12:32:51.065798] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:17.975 Passthru0 00:05:17.975 12:32:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.975 12:32:51 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:17.975 12:32:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.975 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.273 12:32:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.273 12:32:51 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:18.273 { 00:05:18.273 "name": "Malloc2", 00:05:18.273 "aliases": [ 00:05:18.273 "ff163d6e-6777-46cc-b09d-d3aa0b5f6d49" 00:05:18.273 ], 00:05:18.273 "product_name": "Malloc disk", 00:05:18.273 "block_size": 512, 00:05:18.273 "num_blocks": 16384, 00:05:18.273 "uuid": "ff163d6e-6777-46cc-b09d-d3aa0b5f6d49", 00:05:18.273 
"assigned_rate_limits": { 00:05:18.273 "rw_ios_per_sec": 0, 00:05:18.273 "rw_mbytes_per_sec": 0, 00:05:18.273 "r_mbytes_per_sec": 0, 00:05:18.273 "w_mbytes_per_sec": 0 00:05:18.273 }, 00:05:18.273 "claimed": true, 00:05:18.273 "claim_type": "exclusive_write", 00:05:18.273 "zoned": false, 00:05:18.273 "supported_io_types": { 00:05:18.273 "read": true, 00:05:18.273 "write": true, 00:05:18.273 "unmap": true, 00:05:18.273 "write_zeroes": true, 00:05:18.273 "flush": true, 00:05:18.273 "reset": true, 00:05:18.273 "compare": false, 00:05:18.273 "compare_and_write": false, 00:05:18.273 "abort": true, 00:05:18.273 "nvme_admin": false, 00:05:18.273 "nvme_io": false 00:05:18.273 }, 00:05:18.273 "memory_domains": [ 00:05:18.273 { 00:05:18.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.273 "dma_device_type": 2 00:05:18.273 } 00:05:18.273 ], 00:05:18.273 "driver_specific": {} 00:05:18.273 }, 00:05:18.273 { 00:05:18.273 "name": "Passthru0", 00:05:18.273 "aliases": [ 00:05:18.273 "59f76080-c121-508d-b678-c4a789d5aae5" 00:05:18.273 ], 00:05:18.273 "product_name": "passthru", 00:05:18.273 "block_size": 512, 00:05:18.273 "num_blocks": 16384, 00:05:18.273 "uuid": "59f76080-c121-508d-b678-c4a789d5aae5", 00:05:18.273 "assigned_rate_limits": { 00:05:18.273 "rw_ios_per_sec": 0, 00:05:18.273 "rw_mbytes_per_sec": 0, 00:05:18.273 "r_mbytes_per_sec": 0, 00:05:18.273 "w_mbytes_per_sec": 0 00:05:18.273 }, 00:05:18.273 "claimed": false, 00:05:18.273 "zoned": false, 00:05:18.273 "supported_io_types": { 00:05:18.273 "read": true, 00:05:18.273 "write": true, 00:05:18.273 "unmap": true, 00:05:18.273 "write_zeroes": true, 00:05:18.273 "flush": true, 00:05:18.273 "reset": true, 00:05:18.273 "compare": false, 00:05:18.273 "compare_and_write": false, 00:05:18.273 "abort": true, 00:05:18.273 "nvme_admin": false, 00:05:18.273 "nvme_io": false 00:05:18.273 }, 00:05:18.273 "memory_domains": [ 00:05:18.273 { 00:05:18.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.273 "dma_device_type": 2 00:05:18.273 } 00:05:18.273 ], 00:05:18.273 "driver_specific": { 00:05:18.274 "passthru": { 00:05:18.274 "name": "Passthru0", 00:05:18.274 "base_bdev_name": "Malloc2" 00:05:18.274 } 00:05:18.274 } 00:05:18.274 } 00:05:18.274 ]' 00:05:18.274 12:32:51 -- rpc/rpc.sh@21 -- # jq length 00:05:18.274 12:32:51 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:18.274 12:32:51 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:18.274 12:32:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.274 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.274 12:32:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.274 12:32:51 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:18.274 12:32:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.274 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.274 12:32:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.274 12:32:51 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:18.274 12:32:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.274 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.274 12:32:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.274 12:32:51 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:18.274 12:32:51 -- rpc/rpc.sh@26 -- # jq length 00:05:18.274 12:32:51 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:18.274 00:05:18.274 real 0m0.288s 00:05:18.274 user 0m0.186s 00:05:18.274 sys 0m0.038s 00:05:18.274 12:32:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 
00:05:18.274 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.274 ************************************ 00:05:18.274 END TEST rpc_daemon_integrity 00:05:18.274 ************************************ 00:05:18.274 12:32:51 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:18.274 12:32:51 -- rpc/rpc.sh@84 -- # killprocess 308187 00:05:18.274 12:32:51 -- common/autotest_common.sh@936 -- # '[' -z 308187 ']' 00:05:18.274 12:32:51 -- common/autotest_common.sh@940 -- # kill -0 308187 00:05:18.274 12:32:51 -- common/autotest_common.sh@941 -- # uname 00:05:18.274 12:32:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:18.274 12:32:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 308187 00:05:18.274 12:32:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:18.274 12:32:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:18.274 12:32:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 308187' 00:05:18.274 killing process with pid 308187 00:05:18.274 12:32:51 -- common/autotest_common.sh@955 -- # kill 308187 00:05:18.274 12:32:51 -- common/autotest_common.sh@960 -- # wait 308187 00:05:18.563 00:05:18.563 real 0m2.534s 00:05:18.563 user 0m3.211s 00:05:18.563 sys 0m0.745s 00:05:18.563 12:32:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.563 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.563 ************************************ 00:05:18.563 END TEST rpc 00:05:18.563 ************************************ 00:05:18.563 12:32:51 -- spdk/autotest.sh@164 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:18.563 12:32:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.563 12:32:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.563 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.563 ************************************ 00:05:18.563 START TEST rpc_client 00:05:18.563 ************************************ 00:05:18.563 12:32:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:18.827 * Looking for test storage... 
00:05:18.827 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:18.827 12:32:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:18.827 12:32:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:18.827 12:32:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:18.827 12:32:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:18.827 12:32:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:18.827 12:32:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:18.827 12:32:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:18.827 12:32:51 -- scripts/common.sh@335 -- # IFS=.-: 00:05:18.827 12:32:51 -- scripts/common.sh@335 -- # read -ra ver1 00:05:18.827 12:32:51 -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.827 12:32:51 -- scripts/common.sh@336 -- # read -ra ver2 00:05:18.827 12:32:51 -- scripts/common.sh@337 -- # local 'op=<' 00:05:18.827 12:32:51 -- scripts/common.sh@339 -- # ver1_l=2 00:05:18.827 12:32:51 -- scripts/common.sh@340 -- # ver2_l=1 00:05:18.827 12:32:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:18.827 12:32:51 -- scripts/common.sh@343 -- # case "$op" in 00:05:18.827 12:32:51 -- scripts/common.sh@344 -- # : 1 00:05:18.827 12:32:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:18.827 12:32:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.827 12:32:51 -- scripts/common.sh@364 -- # decimal 1 00:05:18.827 12:32:51 -- scripts/common.sh@352 -- # local d=1 00:05:18.827 12:32:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.827 12:32:51 -- scripts/common.sh@354 -- # echo 1 00:05:18.827 12:32:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:18.827 12:32:51 -- scripts/common.sh@365 -- # decimal 2 00:05:18.827 12:32:51 -- scripts/common.sh@352 -- # local d=2 00:05:18.827 12:32:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.827 12:32:51 -- scripts/common.sh@354 -- # echo 2 00:05:18.827 12:32:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:18.827 12:32:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:18.827 12:32:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:18.827 12:32:51 -- scripts/common.sh@367 -- # return 0 00:05:18.827 12:32:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.827 12:32:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:18.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.827 --rc genhtml_branch_coverage=1 00:05:18.827 --rc genhtml_function_coverage=1 00:05:18.827 --rc genhtml_legend=1 00:05:18.827 --rc geninfo_all_blocks=1 00:05:18.827 --rc geninfo_unexecuted_blocks=1 00:05:18.827 00:05:18.827 ' 00:05:18.827 12:32:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:18.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.827 --rc genhtml_branch_coverage=1 00:05:18.827 --rc genhtml_function_coverage=1 00:05:18.827 --rc genhtml_legend=1 00:05:18.827 --rc geninfo_all_blocks=1 00:05:18.827 --rc geninfo_unexecuted_blocks=1 00:05:18.827 00:05:18.827 ' 00:05:18.827 12:32:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:18.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.827 --rc genhtml_branch_coverage=1 00:05:18.827 --rc genhtml_function_coverage=1 00:05:18.827 --rc genhtml_legend=1 00:05:18.827 --rc geninfo_all_blocks=1 00:05:18.827 --rc geninfo_unexecuted_blocks=1 00:05:18.827 00:05:18.827 ' 
00:05:18.827 12:32:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:18.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.827 --rc genhtml_branch_coverage=1 00:05:18.827 --rc genhtml_function_coverage=1 00:05:18.827 --rc genhtml_legend=1 00:05:18.827 --rc geninfo_all_blocks=1 00:05:18.827 --rc geninfo_unexecuted_blocks=1 00:05:18.827 00:05:18.827 ' 00:05:18.827 12:32:51 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:18.827 OK 00:05:18.827 12:32:51 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:18.827 00:05:18.827 real 0m0.221s 00:05:18.827 user 0m0.129s 00:05:18.827 sys 0m0.107s 00:05:18.827 12:32:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.827 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.827 ************************************ 00:05:18.827 END TEST rpc_client 00:05:18.827 ************************************ 00:05:18.827 12:32:51 -- spdk/autotest.sh@165 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:18.827 12:32:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.827 12:32:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.827 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.827 ************************************ 00:05:18.827 START TEST json_config 00:05:18.827 ************************************ 00:05:18.827 12:32:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:19.092 12:32:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:19.092 12:32:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:19.092 12:32:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:19.092 12:32:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:19.092 12:32:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:19.092 12:32:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:19.092 12:32:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:19.092 12:32:52 -- scripts/common.sh@335 -- # IFS=.-: 00:05:19.092 12:32:52 -- scripts/common.sh@335 -- # read -ra ver1 00:05:19.092 12:32:52 -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.092 12:32:52 -- scripts/common.sh@336 -- # read -ra ver2 00:05:19.092 12:32:52 -- scripts/common.sh@337 -- # local 'op=<' 00:05:19.092 12:32:52 -- scripts/common.sh@339 -- # ver1_l=2 00:05:19.092 12:32:52 -- scripts/common.sh@340 -- # ver2_l=1 00:05:19.092 12:32:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:19.092 12:32:52 -- scripts/common.sh@343 -- # case "$op" in 00:05:19.092 12:32:52 -- scripts/common.sh@344 -- # : 1 00:05:19.092 12:32:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:19.092 12:32:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.092 12:32:52 -- scripts/common.sh@364 -- # decimal 1 00:05:19.092 12:32:52 -- scripts/common.sh@352 -- # local d=1 00:05:19.092 12:32:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.092 12:32:52 -- scripts/common.sh@354 -- # echo 1 00:05:19.092 12:32:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:19.092 12:32:52 -- scripts/common.sh@365 -- # decimal 2 00:05:19.092 12:32:52 -- scripts/common.sh@352 -- # local d=2 00:05:19.092 12:32:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.092 12:32:52 -- scripts/common.sh@354 -- # echo 2 00:05:19.092 12:32:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:19.092 12:32:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:19.092 12:32:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:19.092 12:32:52 -- scripts/common.sh@367 -- # return 0 00:05:19.092 12:32:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.093 12:32:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:19.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.093 --rc genhtml_branch_coverage=1 00:05:19.093 --rc genhtml_function_coverage=1 00:05:19.093 --rc genhtml_legend=1 00:05:19.093 --rc geninfo_all_blocks=1 00:05:19.093 --rc geninfo_unexecuted_blocks=1 00:05:19.093 00:05:19.093 ' 00:05:19.093 12:32:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:19.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.093 --rc genhtml_branch_coverage=1 00:05:19.093 --rc genhtml_function_coverage=1 00:05:19.093 --rc genhtml_legend=1 00:05:19.093 --rc geninfo_all_blocks=1 00:05:19.093 --rc geninfo_unexecuted_blocks=1 00:05:19.093 00:05:19.093 ' 00:05:19.093 12:32:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:19.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.093 --rc genhtml_branch_coverage=1 00:05:19.093 --rc genhtml_function_coverage=1 00:05:19.093 --rc genhtml_legend=1 00:05:19.093 --rc geninfo_all_blocks=1 00:05:19.093 --rc geninfo_unexecuted_blocks=1 00:05:19.093 00:05:19.093 ' 00:05:19.093 12:32:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:19.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.093 --rc genhtml_branch_coverage=1 00:05:19.093 --rc genhtml_function_coverage=1 00:05:19.093 --rc genhtml_legend=1 00:05:19.093 --rc geninfo_all_blocks=1 00:05:19.093 --rc geninfo_unexecuted_blocks=1 00:05:19.093 00:05:19.093 ' 00:05:19.093 12:32:52 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.093 12:32:52 -- nvmf/common.sh@7 -- # uname -s 00:05:19.093 12:32:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.093 12:32:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.093 12:32:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.093 12:32:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.093 12:32:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.093 12:32:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.093 12:32:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.093 12:32:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.093 12:32:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.093 12:32:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.093 12:32:52 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:19.093 12:32:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:19.093 12:32:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.093 12:32:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.093 12:32:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.093 12:32:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:19.093 12:32:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.093 12:32:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.093 12:32:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.093 12:32:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.093 12:32:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.093 12:32:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.093 12:32:52 -- paths/export.sh@5 -- # export PATH 00:05:19.093 12:32:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.093 12:32:52 -- nvmf/common.sh@46 -- # : 0 00:05:19.093 12:32:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:19.093 12:32:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:19.093 12:32:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:19.093 12:32:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.093 12:32:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.093 12:32:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:19.093 12:32:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:19.093 12:32:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:19.093 12:32:52 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:19.093 12:32:52 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:19.093 12:32:52 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:19.093 12:32:52 -- 
json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:19.093 12:32:52 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:19.093 12:32:52 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:19.093 12:32:52 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:19.093 12:32:52 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:19.093 12:32:52 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:19.093 12:32:52 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:19.093 12:32:52 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:19.093 12:32:52 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:19.093 12:32:52 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:19.093 12:32:52 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.093 12:32:52 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:19.093 INFO: JSON configuration test init 00:05:19.093 12:32:52 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:19.093 12:32:52 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:19.093 12:32:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.093 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:05:19.093 12:32:52 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:19.093 12:32:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.093 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:05:19.093 12:32:52 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:19.093 12:32:52 -- json_config/json_config.sh@98 -- # local app=target 00:05:19.093 12:32:52 -- json_config/json_config.sh@99 -- # shift 00:05:19.093 12:32:52 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:19.093 12:32:52 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:19.093 12:32:52 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:19.093 12:32:52 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:19.093 12:32:52 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:19.093 12:32:52 -- json_config/json_config.sh@111 -- # app_pid[$app]=308983 00:05:19.093 12:32:52 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:19.093 Waiting for target to run... 
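The json_config run that starts here launches a second spdk_tgt on a dedicated RPC socket and, once it is listening, replays a generated NVMe configuration into it. Based on the flags and RPC calls traced below, a manual equivalent would look approximately like the following; the pipe from gen_nvme.sh into load_config is an assumption drawn from the consecutive trace lines, not something the log states explicitly.

    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"    # socket used by json_config.sh below

    sudo "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &

    until sudo $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    # Feed the generated NVMe subsystem config to the target and list notification types,
    # mirroring the load_config / notify_get_types calls in the trace below.
    sudo "$SPDK_DIR/scripts/gen_nvme.sh" --json-with-subsystems | sudo $RPC load_config
    sudo $RPC notify_get_types | jq -r '.[]'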
00:05:19.093 12:32:52 -- json_config/json_config.sh@114 -- # waitforlisten 308983 /var/tmp/spdk_tgt.sock 00:05:19.093 12:32:52 -- common/autotest_common.sh@829 -- # '[' -z 308983 ']' 00:05:19.093 12:32:52 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:19.093 12:32:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.093 12:32:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.093 12:32:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.093 12:32:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.093 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:05:19.093 [2024-11-20 12:32:52.161112] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:19.093 [2024-11-20 12:32:52.161186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308983 ] 00:05:19.093 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.685 [2024-11-20 12:32:52.473301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.685 [2024-11-20 12:32:52.541005] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:19.685 [2024-11-20 12:32:52.541177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.962 12:32:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.962 12:32:52 -- common/autotest_common.sh@862 -- # return 0 00:05:19.962 12:32:52 -- json_config/json_config.sh@115 -- # echo '' 00:05:19.962 00:05:19.962 12:32:52 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:19.962 12:32:52 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:19.962 12:32:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.962 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:05:19.962 12:32:52 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:19.962 12:32:52 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:19.962 12:32:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:19.962 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:05:19.962 12:32:52 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:19.962 12:32:52 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:19.962 12:32:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:20.549 12:32:53 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:20.549 12:32:53 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:20.549 12:32:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:20.549 12:32:53 -- common/autotest_common.sh@10 -- # set +x 00:05:20.549 12:32:53 -- json_config/json_config.sh@48 -- # local ret=0 00:05:20.549 12:32:53 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:20.549 12:32:53 -- 
json_config/json_config.sh@49 -- # local enabled_types 00:05:20.549 12:32:53 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:20.549 12:32:53 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:20.549 12:32:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:20.819 12:32:53 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:20.819 12:32:53 -- json_config/json_config.sh@51 -- # local get_types 00:05:20.819 12:32:53 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:20.819 12:32:53 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:20.819 12:32:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:20.819 12:32:53 -- common/autotest_common.sh@10 -- # set +x 00:05:20.819 12:32:53 -- json_config/json_config.sh@58 -- # return 0 00:05:20.819 12:32:53 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:20.819 12:32:53 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:20.819 12:32:53 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:20.819 12:32:53 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:20.819 12:32:53 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:20.819 12:32:53 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:20.819 12:32:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:20.819 12:32:53 -- common/autotest_common.sh@10 -- # set +x 00:05:20.819 12:32:53 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:20.819 12:32:53 -- json_config/json_config.sh@286 -- # [[ rdma == \r\d\m\a ]] 00:05:20.819 12:32:53 -- json_config/json_config.sh@287 -- # TEST_TRANSPORT=rdma 00:05:20.819 12:32:53 -- json_config/json_config.sh@287 -- # nvmftestinit 00:05:20.819 12:32:53 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:05:20.819 12:32:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:20.819 12:32:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:05:20.819 12:32:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:05:20.819 12:32:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:05:20.819 12:32:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:20.819 12:32:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:20.819 12:32:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:20.819 12:32:53 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:05:20.819 12:32:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:05:20.819 12:32:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:05:20.819 12:32:53 -- common/autotest_common.sh@10 -- # set +x 00:05:27.592 12:33:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:27.592 12:33:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:05:27.592 12:33:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:05:27.592 12:33:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:05:27.592 12:33:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:05:27.592 12:33:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:05:27.592 12:33:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:05:27.592 12:33:00 -- nvmf/common.sh@294 -- # net_devs=() 00:05:27.592 12:33:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:05:27.592 12:33:00 -- nvmf/common.sh@295 -- # 
e810=() 00:05:27.592 12:33:00 -- nvmf/common.sh@295 -- # local -ga e810 00:05:27.592 12:33:00 -- nvmf/common.sh@296 -- # x722=() 00:05:27.592 12:33:00 -- nvmf/common.sh@296 -- # local -ga x722 00:05:27.592 12:33:00 -- nvmf/common.sh@297 -- # mlx=() 00:05:27.592 12:33:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:05:27.592 12:33:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:27.592 12:33:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:27.592 12:33:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:27.592 12:33:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:27.592 12:33:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:27.592 12:33:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:27.592 12:33:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:27.592 12:33:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:27.592 12:33:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:27.592 12:33:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:27.592 12:33:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:27.592 12:33:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:05:27.592 12:33:00 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:05:27.592 12:33:00 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:05:27.592 12:33:00 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:05:27.592 12:33:00 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:05:27.592 12:33:00 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:05:27.592 12:33:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:05:27.592 12:33:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:05:27.592 12:33:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:05:27.592 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:05:27.862 12:33:00 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:05:27.862 12:33:00 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:05:27.862 12:33:00 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:27.862 12:33:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:27.862 12:33:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:05:27.862 12:33:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:05:27.862 12:33:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:05:27.862 12:33:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:05:27.862 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:05:27.862 12:33:00 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:05:27.862 12:33:00 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:05:27.862 12:33:00 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:27.862 12:33:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:27.862 12:33:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:05:27.862 12:33:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:05:27.862 12:33:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:05:27.863 12:33:00 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:05:27.863 12:33:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:05:27.863 12:33:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:27.863 12:33:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
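The trace above is nvmf/common.sh building its PCI allow-lists (Intel E810 and X722 IDs plus the Mellanox mlx5 family) and then, because SPDK_TEST_NVMF_NICS=mlx5, keeping only the Mellanox entries: both ports of the adapter at 0000:98:00.0 and 0000:98:00.1 (0x15b3 - 0x1015) are bound to mlx5_core. A rough manual equivalent of that discovery step, assuming lspci and sysfs are available (the vendor ID and sysfs layout match the log; everything else is illustrative):

    # List Mellanox (vendor 0x15b3) PCI functions with their bound driver and netdev,
    # mirroring what the allow-list walk records for 0000:98:00.0 and 0000:98:00.1.
    for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
        drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$pci/driver")")
        net=$(ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null)
        echo "$pci driver=$drv netdev=$net"
    done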
00:05:27.863 12:33:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:27.863 12:33:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:05:27.863 Found net devices under 0000:98:00.0: mlx_0_0 00:05:27.863 12:33:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:05:27.863 12:33:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:05:27.863 12:33:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:27.863 12:33:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:05:27.863 12:33:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:27.863 12:33:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:05:27.863 Found net devices under 0000:98:00.1: mlx_0_1 00:05:27.863 12:33:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:05:27.863 12:33:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:05:27.863 12:33:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:05:27.863 12:33:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:05:27.863 12:33:00 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:05:27.863 12:33:00 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:05:27.863 12:33:00 -- nvmf/common.sh@408 -- # rdma_device_init 00:05:27.863 12:33:00 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:05:27.863 12:33:00 -- nvmf/common.sh@57 -- # uname 00:05:27.863 12:33:00 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:05:27.863 12:33:00 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:05:27.863 12:33:00 -- nvmf/common.sh@62 -- # modprobe ib_core 00:05:27.863 12:33:00 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:05:27.863 12:33:00 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:05:27.863 12:33:00 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:05:27.863 12:33:00 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:05:27.863 12:33:00 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:05:27.863 12:33:00 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:05:27.863 12:33:00 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:27.863 12:33:00 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:05:27.863 12:33:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:27.863 12:33:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:05:27.863 12:33:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:05:27.863 12:33:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:27.863 12:33:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:05:27.863 12:33:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:27.863 12:33:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:27.863 12:33:00 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:27.863 12:33:00 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:05:27.863 12:33:00 -- nvmf/common.sh@104 -- # continue 2 00:05:27.863 12:33:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:27.863 12:33:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:27.863 12:33:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:27.863 12:33:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:27.863 12:33:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:27.863 12:33:00 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:05:27.863 12:33:00 -- nvmf/common.sh@104 -- # continue 2 00:05:27.863 12:33:00 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:05:27.863 12:33:00 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:05:27.863 12:33:00 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:05:27.863 12:33:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:05:27.863 12:33:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:27.863 12:33:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:27.863 12:33:00 -- nvmf/common.sh@73 -- # ip= 00:05:27.863 12:33:00 -- nvmf/common.sh@74 -- # [[ -z '' ]] 00:05:27.863 12:33:00 -- nvmf/common.sh@75 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:05:27.863 12:33:00 -- nvmf/common.sh@76 -- # ip link set mlx_0_0 up 00:05:27.863 12:33:00 -- nvmf/common.sh@77 -- # (( count = count + 1 )) 00:05:27.863 12:33:00 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:05:27.863 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:27.863 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:05:27.863 altname enp152s0f0np0 00:05:27.863 altname ens817f0np0 00:05:27.863 inet 192.168.100.8/24 scope global mlx_0_0 00:05:27.863 valid_lft forever preferred_lft forever 00:05:27.863 12:33:00 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:05:28.141 12:33:00 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:05:28.141 12:33:00 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:05:28.141 12:33:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:05:28.141 12:33:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:28.141 12:33:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:28.141 12:33:00 -- nvmf/common.sh@73 -- # ip= 00:05:28.141 12:33:00 -- nvmf/common.sh@74 -- # [[ -z '' ]] 00:05:28.141 12:33:00 -- nvmf/common.sh@75 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:05:28.141 12:33:00 -- nvmf/common.sh@76 -- # ip link set mlx_0_1 up 00:05:28.141 12:33:00 -- nvmf/common.sh@77 -- # (( count = count + 1 )) 00:05:28.141 12:33:00 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:05:28.141 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:28.141 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:05:28.141 altname enp152s0f1np1 00:05:28.141 altname ens817f1np1 00:05:28.141 inet 192.168.100.9/24 scope global mlx_0_1 00:05:28.141 valid_lft forever preferred_lft forever 00:05:28.141 12:33:00 -- nvmf/common.sh@410 -- # return 0 00:05:28.141 12:33:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:05:28.141 12:33:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:28.141 12:33:00 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:05:28.141 12:33:00 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:05:28.141 12:33:00 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:05:28.141 12:33:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:28.141 12:33:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:05:28.141 12:33:01 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:05:28.141 12:33:01 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:28.141 12:33:01 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:05:28.141 12:33:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:28.141 12:33:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:28.141 12:33:01 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:28.141 12:33:01 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:05:28.141 12:33:01 -- nvmf/common.sh@104 -- # continue 2 00:05:28.141 12:33:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:28.141 12:33:01 -- 
nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:28.141 12:33:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:28.141 12:33:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:28.141 12:33:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:28.141 12:33:01 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:05:28.141 12:33:01 -- nvmf/common.sh@104 -- # continue 2 00:05:28.141 12:33:01 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:05:28.141 12:33:01 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:05:28.141 12:33:01 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:05:28.141 12:33:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:05:28.141 12:33:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:28.141 12:33:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:28.141 12:33:01 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:05:28.141 12:33:01 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:05:28.141 12:33:01 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:05:28.141 12:33:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:05:28.141 12:33:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:28.141 12:33:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:28.141 12:33:01 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:05:28.141 192.168.100.9' 00:05:28.141 12:33:01 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:05:28.141 192.168.100.9' 00:05:28.141 12:33:01 -- nvmf/common.sh@445 -- # head -n 1 00:05:28.141 12:33:01 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:28.141 12:33:01 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:05:28.141 192.168.100.9' 00:05:28.141 12:33:01 -- nvmf/common.sh@446 -- # tail -n +2 00:05:28.141 12:33:01 -- nvmf/common.sh@446 -- # head -n 1 00:05:28.141 12:33:01 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:28.141 12:33:01 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:05:28.141 12:33:01 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:28.141 12:33:01 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:05:28.141 12:33:01 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:05:28.141 12:33:01 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:05:28.141 12:33:01 -- json_config/json_config.sh@290 -- # [[ -z 192.168.100.8 ]] 00:05:28.141 12:33:01 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:28.142 12:33:01 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:28.142 MallocForNvmf0 00:05:28.429 12:33:01 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:28.429 12:33:01 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:28.429 MallocForNvmf1 00:05:28.429 12:33:01 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:28.429 12:33:01 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:28.725 [2024-11-20 12:33:01.575283] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 
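allocate_nic_ips above gives the two mlx5 ports static addresses on 192.168.100.0/24, records them as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP, and finishes nvmftestinit by setting the RDMA transport options and loading nvme-rdma. Condensed to roughly the commands the trace runs (interface names, addresses and module list are taken from the log; run as root):

    # Kernel modules loaded by load_ib_rdma_modules, plus nvme-rdma at the end.
    modprobe -a ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma
    # Static addressing performed by allocate_nic_ips for the two ConnectX ports.
    ip addr add 192.168.100.8/24 dev mlx_0_0 && ip link set mlx_0_0 up
    ip addr add 192.168.100.9/24 dev mlx_0_1 && ip link set mlx_0_1 up
    # Same extraction the script uses to build RDMA_IP_LIST.
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.8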
00:05:28.725 [2024-11-20 12:33:01.640966] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ac98b0/0x1ad6640) succeed. 00:05:28.725 [2024-11-20 12:33:01.651498] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1acbaa0/0x1b56680) succeed. 00:05:28.725 12:33:01 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:28.725 12:33:01 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.012 12:33:01 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.012 12:33:01 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.012 12:33:02 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.012 12:33:02 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.287 12:33:02 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:29.288 12:33:02 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:29.288 [2024-11-20 12:33:02.362195] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:29.288 12:33:02 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:29.288 12:33:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.288 12:33:02 -- common/autotest_common.sh@10 -- # set +x 00:05:29.568 12:33:02 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:29.568 12:33:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.568 12:33:02 -- common/autotest_common.sh@10 -- # set +x 00:05:29.568 12:33:02 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:29.568 12:33:02 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:29.568 12:33:02 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:29.568 MallocBdevForConfigChangeCheck 00:05:29.568 12:33:02 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:29.568 12:33:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.568 12:33:02 -- common/autotest_common.sh@10 -- # set +x 00:05:29.866 12:33:02 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:29.866 12:33:02 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:29.866 12:33:02 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:29.866 INFO: shutting down applications... 
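With both IB devices up, create_nvmf_subsystem_config drives the target purely over its RPC socket: two malloc bdevs, an RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1 with two namespaces and a listener on 192.168.100.8:4420, plus the extra MallocBdevForConfigChangeCheck used later to provoke a config diff. The same sequence, condensed from the tgt_rpc calls above (every argument appears verbatim in the trace; only the $RPC shorthand is added here):

    RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t rdma -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck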
00:05:29.866 12:33:02 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:29.866 12:33:02 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:29.866 12:33:02 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:29.866 12:33:02 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:30.498 Calling clear_iscsi_subsystem 00:05:30.498 Calling clear_nvmf_subsystem 00:05:30.498 Calling clear_nbd_subsystem 00:05:30.498 Calling clear_ublk_subsystem 00:05:30.498 Calling clear_vhost_blk_subsystem 00:05:30.498 Calling clear_vhost_scsi_subsystem 00:05:30.498 Calling clear_scheduler_subsystem 00:05:30.498 Calling clear_bdev_subsystem 00:05:30.498 Calling clear_accel_subsystem 00:05:30.498 Calling clear_vmd_subsystem 00:05:30.498 Calling clear_sock_subsystem 00:05:30.498 Calling clear_iobuf_subsystem 00:05:30.498 12:33:03 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:30.498 12:33:03 -- json_config/json_config.sh@396 -- # count=100 00:05:30.498 12:33:03 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:30.498 12:33:03 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.498 12:33:03 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:30.498 12:33:03 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:30.796 12:33:03 -- json_config/json_config.sh@398 -- # break 00:05:30.796 12:33:03 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:30.796 12:33:03 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:30.796 12:33:03 -- json_config/json_config.sh@120 -- # local app=target 00:05:30.796 12:33:03 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:30.796 12:33:03 -- json_config/json_config.sh@124 -- # [[ -n 308983 ]] 00:05:30.796 12:33:03 -- json_config/json_config.sh@127 -- # kill -SIGINT 308983 00:05:30.796 12:33:03 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:30.796 12:33:03 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:30.796 12:33:03 -- json_config/json_config.sh@130 -- # kill -0 308983 00:05:30.796 12:33:03 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:31.091 12:33:04 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:31.091 12:33:04 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:31.091 12:33:04 -- json_config/json_config.sh@130 -- # kill -0 308983 00:05:31.091 12:33:04 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:31.382 12:33:04 -- json_config/json_config.sh@132 -- # break 00:05:31.382 12:33:04 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:31.382 12:33:04 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:31.382 SPDK target shutdown done 00:05:31.382 12:33:04 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:31.382 INFO: relaunching applications... 
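Shutdown here is deliberately gentle: json_config_clear first empties the runtime configuration through clear_config.py, then json_config_test_shutdown_app sends SIGINT and polls for up to 30 half-second intervals before declaring the target gone. Roughly the loop traced above (the PID 308983 is specific to this run):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py \
        -s /var/tmp/spdk_tgt.sock clear_config
    kill -SIGINT 308983
    for i in $(seq 1 30); do
        # kill -0 only checks whether the PID still exists.
        kill -0 308983 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
        sleep 0.5
    done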
00:05:31.382 12:33:04 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:31.382 12:33:04 -- json_config/json_config.sh@98 -- # local app=target 00:05:31.382 12:33:04 -- json_config/json_config.sh@99 -- # shift 00:05:31.382 12:33:04 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:31.382 12:33:04 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:31.382 12:33:04 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:31.382 12:33:04 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:31.382 12:33:04 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:31.382 12:33:04 -- json_config/json_config.sh@111 -- # app_pid[$app]=314009 00:05:31.382 12:33:04 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:31.382 Waiting for target to run... 00:05:31.382 12:33:04 -- json_config/json_config.sh@114 -- # waitforlisten 314009 /var/tmp/spdk_tgt.sock 00:05:31.382 12:33:04 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:31.382 12:33:04 -- common/autotest_common.sh@829 -- # '[' -z 314009 ']' 00:05:31.382 12:33:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.382 12:33:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.382 12:33:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.382 12:33:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.382 12:33:04 -- common/autotest_common.sh@10 -- # set +x 00:05:31.382 [2024-11-20 12:33:04.242138] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:31.382 [2024-11-20 12:33:04.242214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid314009 ] 00:05:31.382 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.660 [2024-11-20 12:33:04.562256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.660 [2024-11-20 12:33:04.613700] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.660 [2024-11-20 12:33:04.613811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.285 [2024-11-20 12:33:05.108910] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x91d390/0x929800) succeed. 00:05:32.286 [2024-11-20 12:33:05.118502] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x91f580/0x96aea0) succeed. 
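The relaunch above differs from the first start in exactly one respect: instead of --wait-for-rpc followed by RPC-driven setup, the target replays the JSON written earlier by save_config, and the same two mlx5 IB devices come straight back up. The command line, as it appears in the trace:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json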
00:05:32.286 [2024-11-20 12:33:05.166022] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:32.889 12:33:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.889 12:33:05 -- common/autotest_common.sh@862 -- # return 0 00:05:32.889 12:33:05 -- json_config/json_config.sh@115 -- # echo '' 00:05:32.889 00:05:32.889 12:33:05 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:32.889 12:33:05 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:32.889 INFO: Checking if target configuration is the same... 00:05:32.889 12:33:05 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.889 12:33:05 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:32.889 12:33:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.889 + '[' 2 -ne 2 ']' 00:05:32.889 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:32.889 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:32.889 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:32.889 +++ basename /dev/fd/62 00:05:32.889 ++ mktemp /tmp/62.XXX 00:05:32.889 + tmp_file_1=/tmp/62.C6f 00:05:32.889 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.889 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:32.889 + tmp_file_2=/tmp/spdk_tgt_config.json.R4k 00:05:32.889 + ret=0 00:05:32.889 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.170 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.170 + diff -u /tmp/62.C6f /tmp/spdk_tgt_config.json.R4k 00:05:33.170 + echo 'INFO: JSON config files are the same' 00:05:33.170 INFO: JSON config files are the same 00:05:33.170 + rm /tmp/62.C6f /tmp/spdk_tgt_config.json.R4k 00:05:33.170 + exit 0 00:05:33.170 12:33:06 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:33.170 12:33:06 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:33.170 INFO: changing configuration and checking if this can be detected... 00:05:33.170 12:33:06 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:33.170 12:33:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:33.170 12:33:06 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.170 12:33:06 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:33.170 12:33:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.170 + '[' 2 -ne 2 ']' 00:05:33.170 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:33.170 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
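The "same configuration" check, and the change-detection check that begins just below it, both use one mechanism: json_diff.sh normalises the live save_config output and the on-disk spdk_tgt_config.json with config_filter.py -method sort, writes them to mktemp files, and runs a plain diff -u. An empty diff means "JSON config files are the same"; a non-empty one, produced after MallocBdevForConfigChangeCheck is deleted, means the change was detected. A rough sketch of that comparison, assuming config_filter.py reads the config on stdin as json_diff.sh pipes it here (the temp-file names are mktemp output from this run; $RPC is the shorthand from the earlier sketch):

    CF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py
    $RPC save_config | $CF -method sort > /tmp/62.C6f
    $CF -method sort < /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json \
        > /tmp/spdk_tgt_config.json.R4k
    if diff -u /tmp/62.C6f /tmp/spdk_tgt_config.json.R4k; then
        echo 'INFO: JSON config files are the same'
    fi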
00:05:33.170 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:33.170 +++ basename /dev/fd/62 00:05:33.170 ++ mktemp /tmp/62.XXX 00:05:33.170 + tmp_file_1=/tmp/62.YNk 00:05:33.170 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.170 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:33.170 + tmp_file_2=/tmp/spdk_tgt_config.json.w1A 00:05:33.170 + ret=0 00:05:33.170 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.491 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.491 + diff -u /tmp/62.YNk /tmp/spdk_tgt_config.json.w1A 00:05:33.491 + ret=1 00:05:33.491 + echo '=== Start of file: /tmp/62.YNk ===' 00:05:33.491 + cat /tmp/62.YNk 00:05:33.491 + echo '=== End of file: /tmp/62.YNk ===' 00:05:33.491 + echo '' 00:05:33.491 + echo '=== Start of file: /tmp/spdk_tgt_config.json.w1A ===' 00:05:33.491 + cat /tmp/spdk_tgt_config.json.w1A 00:05:33.491 + echo '=== End of file: /tmp/spdk_tgt_config.json.w1A ===' 00:05:33.491 + echo '' 00:05:33.491 + rm /tmp/62.YNk /tmp/spdk_tgt_config.json.w1A 00:05:33.491 + exit 1 00:05:33.491 12:33:06 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:33.491 INFO: configuration change detected. 00:05:33.491 12:33:06 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:33.491 12:33:06 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:33.491 12:33:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.491 12:33:06 -- common/autotest_common.sh@10 -- # set +x 00:05:33.783 12:33:06 -- json_config/json_config.sh@360 -- # local ret=0 00:05:33.783 12:33:06 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:33.783 12:33:06 -- json_config/json_config.sh@370 -- # [[ -n 314009 ]] 00:05:33.783 12:33:06 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:33.783 12:33:06 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:33.783 12:33:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.783 12:33:06 -- common/autotest_common.sh@10 -- # set +x 00:05:33.783 12:33:06 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:33.783 12:33:06 -- json_config/json_config.sh@246 -- # uname -s 00:05:33.783 12:33:06 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:33.783 12:33:06 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:33.783 12:33:06 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:33.783 12:33:06 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:33.783 12:33:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.783 12:33:06 -- common/autotest_common.sh@10 -- # set +x 00:05:33.783 12:33:06 -- json_config/json_config.sh@376 -- # killprocess 314009 00:05:33.783 12:33:06 -- common/autotest_common.sh@936 -- # '[' -z 314009 ']' 00:05:33.783 12:33:06 -- common/autotest_common.sh@940 -- # kill -0 314009 00:05:33.783 12:33:06 -- common/autotest_common.sh@941 -- # uname 00:05:33.783 12:33:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.784 12:33:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 314009 00:05:33.784 12:33:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:33.784 12:33:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:33.784 12:33:06 -- common/autotest_common.sh@954 -- # echo 'killing process 
with pid 314009' 00:05:33.784 killing process with pid 314009 00:05:33.784 12:33:06 -- common/autotest_common.sh@955 -- # kill 314009 00:05:33.784 12:33:06 -- common/autotest_common.sh@960 -- # wait 314009 00:05:34.065 12:33:06 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.065 12:33:07 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:34.065 12:33:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.065 12:33:07 -- common/autotest_common.sh@10 -- # set +x 00:05:34.065 12:33:07 -- json_config/json_config.sh@381 -- # return 0 00:05:34.065 12:33:07 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:34.065 INFO: Success 00:05:34.065 12:33:07 -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:34.065 12:33:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:05:34.065 12:33:07 -- nvmf/common.sh@116 -- # sync 00:05:34.065 12:33:07 -- nvmf/common.sh@118 -- # '[' '' == tcp ']' 00:05:34.065 12:33:07 -- nvmf/common.sh@118 -- # '[' '' == rdma ']' 00:05:34.065 12:33:07 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:05:34.065 12:33:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:05:34.065 12:33:07 -- nvmf/common.sh@483 -- # [[ '' == \t\c\p ]] 00:05:34.065 00:05:34.065 real 0m15.163s 00:05:34.065 user 0m18.697s 00:05:34.065 sys 0m7.342s 00:05:34.065 12:33:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.065 12:33:07 -- common/autotest_common.sh@10 -- # set +x 00:05:34.065 ************************************ 00:05:34.065 END TEST json_config 00:05:34.065 ************************************ 00:05:34.065 12:33:07 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:34.065 12:33:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.065 12:33:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.065 12:33:07 -- common/autotest_common.sh@10 -- # set +x 00:05:34.065 ************************************ 00:05:34.065 START TEST json_config_extra_key 00:05:34.065 ************************************ 00:05:34.065 12:33:07 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:34.065 12:33:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:34.065 12:33:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:34.065 12:33:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:34.327 12:33:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:34.327 12:33:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:34.327 12:33:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:34.327 12:33:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:34.327 12:33:07 -- scripts/common.sh@335 -- # IFS=.-: 00:05:34.327 12:33:07 -- scripts/common.sh@335 -- # read -ra ver1 00:05:34.327 12:33:07 -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.327 12:33:07 -- scripts/common.sh@336 -- # read -ra ver2 00:05:34.327 12:33:07 -- scripts/common.sh@337 -- # local 'op=<' 00:05:34.327 12:33:07 -- scripts/common.sh@339 -- # ver1_l=2 00:05:34.327 12:33:07 -- scripts/common.sh@340 -- # ver2_l=1 00:05:34.327 12:33:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:34.327 12:33:07 -- scripts/common.sh@343 -- # case "$op" in 00:05:34.327 12:33:07 -- scripts/common.sh@344 -- # : 1 
00:05:34.327 12:33:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:34.327 12:33:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.327 12:33:07 -- scripts/common.sh@364 -- # decimal 1 00:05:34.327 12:33:07 -- scripts/common.sh@352 -- # local d=1 00:05:34.327 12:33:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.327 12:33:07 -- scripts/common.sh@354 -- # echo 1 00:05:34.327 12:33:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:34.327 12:33:07 -- scripts/common.sh@365 -- # decimal 2 00:05:34.327 12:33:07 -- scripts/common.sh@352 -- # local d=2 00:05:34.327 12:33:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.327 12:33:07 -- scripts/common.sh@354 -- # echo 2 00:05:34.327 12:33:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:34.327 12:33:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:34.327 12:33:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:34.327 12:33:07 -- scripts/common.sh@367 -- # return 0 00:05:34.327 12:33:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.327 12:33:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:34.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.327 --rc genhtml_branch_coverage=1 00:05:34.327 --rc genhtml_function_coverage=1 00:05:34.327 --rc genhtml_legend=1 00:05:34.327 --rc geninfo_all_blocks=1 00:05:34.327 --rc geninfo_unexecuted_blocks=1 00:05:34.327 00:05:34.327 ' 00:05:34.327 12:33:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:34.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.327 --rc genhtml_branch_coverage=1 00:05:34.327 --rc genhtml_function_coverage=1 00:05:34.327 --rc genhtml_legend=1 00:05:34.327 --rc geninfo_all_blocks=1 00:05:34.327 --rc geninfo_unexecuted_blocks=1 00:05:34.327 00:05:34.327 ' 00:05:34.327 12:33:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:34.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.327 --rc genhtml_branch_coverage=1 00:05:34.327 --rc genhtml_function_coverage=1 00:05:34.327 --rc genhtml_legend=1 00:05:34.327 --rc geninfo_all_blocks=1 00:05:34.327 --rc geninfo_unexecuted_blocks=1 00:05:34.327 00:05:34.327 ' 00:05:34.327 12:33:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:34.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.327 --rc genhtml_branch_coverage=1 00:05:34.327 --rc genhtml_function_coverage=1 00:05:34.327 --rc genhtml_legend=1 00:05:34.327 --rc geninfo_all_blocks=1 00:05:34.327 --rc geninfo_unexecuted_blocks=1 00:05:34.327 00:05:34.327 ' 00:05:34.327 12:33:07 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:34.328 12:33:07 -- nvmf/common.sh@7 -- # uname -s 00:05:34.328 12:33:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.328 12:33:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.328 12:33:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.328 12:33:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.328 12:33:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.328 12:33:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.328 12:33:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.328 12:33:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.328 12:33:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.328 12:33:07 -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.328 12:33:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:34.328 12:33:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:34.328 12:33:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.328 12:33:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.328 12:33:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.328 12:33:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:34.328 12:33:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.328 12:33:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.328 12:33:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.328 12:33:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.328 12:33:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.328 12:33:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.328 12:33:07 -- paths/export.sh@5 -- # export PATH 00:05:34.328 12:33:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.328 12:33:07 -- nvmf/common.sh@46 -- # : 0 00:05:34.328 12:33:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:34.328 12:33:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:34.328 12:33:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:34.328 12:33:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.328 12:33:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.328 12:33:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:34.328 12:33:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:34.328 12:33:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 
00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:34.328 INFO: launching applications... 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=314693 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:34.328 Waiting for target to run... 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 314693 /var/tmp/spdk_tgt.sock 00:05:34.328 12:33:07 -- common/autotest_common.sh@829 -- # '[' -z 314693 ']' 00:05:34.328 12:33:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.328 12:33:07 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:34.328 12:33:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.328 12:33:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.328 12:33:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.328 12:33:07 -- common/autotest_common.sh@10 -- # set +x 00:05:34.328 [2024-11-20 12:33:07.349649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:34.328 [2024-11-20 12:33:07.349729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid314693 ] 00:05:34.328 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.588 [2024-11-20 12:33:07.618901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.588 [2024-11-20 12:33:07.661777] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.588 [2024-11-20 12:33:07.661880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.161 12:33:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.161 12:33:08 -- common/autotest_common.sh@862 -- # return 0 00:05:35.161 12:33:08 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:35.161 00:05:35.161 12:33:08 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:35.161 INFO: shutting down applications... 00:05:35.161 12:33:08 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:35.161 12:33:08 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:35.161 12:33:08 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:35.161 12:33:08 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 314693 ]] 00:05:35.161 12:33:08 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 314693 00:05:35.161 12:33:08 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:35.161 12:33:08 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:35.161 12:33:08 -- json_config/json_config_extra_key.sh@50 -- # kill -0 314693 00:05:35.161 12:33:08 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:35.733 12:33:08 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:35.733 12:33:08 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:35.733 12:33:08 -- json_config/json_config_extra_key.sh@50 -- # kill -0 314693 00:05:35.733 12:33:08 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:35.733 12:33:08 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:35.733 12:33:08 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:35.733 12:33:08 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:35.733 SPDK target shutdown done 00:05:35.733 12:33:08 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:35.733 Success 00:05:35.733 00:05:35.733 real 0m1.553s 00:05:35.733 user 0m1.181s 00:05:35.733 sys 0m0.384s 00:05:35.733 12:33:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.733 12:33:08 -- common/autotest_common.sh@10 -- # set +x 00:05:35.733 ************************************ 00:05:35.733 END TEST json_config_extra_key 00:05:35.733 ************************************ 00:05:35.733 12:33:08 -- spdk/autotest.sh@167 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.733 12:33:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.733 12:33:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.733 12:33:08 -- common/autotest_common.sh@10 -- # set +x 00:05:35.733 ************************************ 00:05:35.733 START TEST alias_rpc 00:05:35.733 ************************************ 00:05:35.733 12:33:08 -- common/autotest_common.sh@1114 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.733 * Looking for test storage... 00:05:35.733 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:35.733 12:33:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:35.733 12:33:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:35.733 12:33:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:35.994 12:33:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:35.994 12:33:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:35.994 12:33:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:35.994 12:33:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:35.994 12:33:08 -- scripts/common.sh@335 -- # IFS=.-: 00:05:35.994 12:33:08 -- scripts/common.sh@335 -- # read -ra ver1 00:05:35.994 12:33:08 -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.994 12:33:08 -- scripts/common.sh@336 -- # read -ra ver2 00:05:35.994 12:33:08 -- scripts/common.sh@337 -- # local 'op=<' 00:05:35.994 12:33:08 -- scripts/common.sh@339 -- # ver1_l=2 00:05:35.994 12:33:08 -- scripts/common.sh@340 -- # ver2_l=1 00:05:35.994 12:33:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:35.994 12:33:08 -- scripts/common.sh@343 -- # case "$op" in 00:05:35.994 12:33:08 -- scripts/common.sh@344 -- # : 1 00:05:35.994 12:33:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:35.994 12:33:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.994 12:33:08 -- scripts/common.sh@364 -- # decimal 1 00:05:35.994 12:33:08 -- scripts/common.sh@352 -- # local d=1 00:05:35.994 12:33:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.994 12:33:08 -- scripts/common.sh@354 -- # echo 1 00:05:35.994 12:33:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:35.994 12:33:08 -- scripts/common.sh@365 -- # decimal 2 00:05:35.994 12:33:08 -- scripts/common.sh@352 -- # local d=2 00:05:35.994 12:33:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.994 12:33:08 -- scripts/common.sh@354 -- # echo 2 00:05:35.994 12:33:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:35.994 12:33:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:35.994 12:33:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:35.995 12:33:08 -- scripts/common.sh@367 -- # return 0 00:05:35.995 12:33:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.995 12:33:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:35.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.995 --rc genhtml_branch_coverage=1 00:05:35.995 --rc genhtml_function_coverage=1 00:05:35.995 --rc genhtml_legend=1 00:05:35.995 --rc geninfo_all_blocks=1 00:05:35.995 --rc geninfo_unexecuted_blocks=1 00:05:35.995 00:05:35.995 ' 00:05:35.995 12:33:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:35.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.995 --rc genhtml_branch_coverage=1 00:05:35.995 --rc genhtml_function_coverage=1 00:05:35.995 --rc genhtml_legend=1 00:05:35.995 --rc geninfo_all_blocks=1 00:05:35.995 --rc geninfo_unexecuted_blocks=1 00:05:35.995 00:05:35.995 ' 00:05:35.995 12:33:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:35.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.995 --rc genhtml_branch_coverage=1 00:05:35.995 --rc genhtml_function_coverage=1 
00:05:35.995 --rc genhtml_legend=1 00:05:35.995 --rc geninfo_all_blocks=1 00:05:35.995 --rc geninfo_unexecuted_blocks=1 00:05:35.995 00:05:35.995 ' 00:05:35.995 12:33:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:35.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.995 --rc genhtml_branch_coverage=1 00:05:35.995 --rc genhtml_function_coverage=1 00:05:35.995 --rc genhtml_legend=1 00:05:35.995 --rc geninfo_all_blocks=1 00:05:35.995 --rc geninfo_unexecuted_blocks=1 00:05:35.995 00:05:35.995 ' 00:05:35.995 12:33:08 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:35.995 12:33:08 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=315021 00:05:35.995 12:33:08 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 315021 00:05:35.995 12:33:08 -- common/autotest_common.sh@829 -- # '[' -z 315021 ']' 00:05:35.995 12:33:08 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.995 12:33:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.995 12:33:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.995 12:33:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.995 12:33:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.995 12:33:08 -- common/autotest_common.sh@10 -- # set +x 00:05:35.995 [2024-11-20 12:33:08.947606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:35.995 [2024-11-20 12:33:08.947683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid315021 ] 00:05:35.995 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.995 [2024-11-20 12:33:09.026678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.995 [2024-11-20 12:33:09.087028] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.995 [2024-11-20 12:33:09.087131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.935 12:33:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.935 12:33:09 -- common/autotest_common.sh@862 -- # return 0 00:05:36.935 12:33:09 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:36.935 12:33:09 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 315021 00:05:36.935 12:33:09 -- common/autotest_common.sh@936 -- # '[' -z 315021 ']' 00:05:36.935 12:33:09 -- common/autotest_common.sh@940 -- # kill -0 315021 00:05:36.935 12:33:09 -- common/autotest_common.sh@941 -- # uname 00:05:36.935 12:33:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:36.935 12:33:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 315021 00:05:36.936 12:33:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:36.936 12:33:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:36.936 12:33:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 315021' 00:05:36.936 killing process with pid 315021 00:05:36.936 12:33:09 -- common/autotest_common.sh@955 -- # kill 315021 00:05:36.936 12:33:09 -- common/autotest_common.sh@960 -- # wait 
315021 00:05:37.197 00:05:37.197 real 0m1.468s 00:05:37.197 user 0m1.581s 00:05:37.197 sys 0m0.406s 00:05:37.197 12:33:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.197 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:05:37.197 ************************************ 00:05:37.197 END TEST alias_rpc 00:05:37.197 ************************************ 00:05:37.197 12:33:10 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:37.197 12:33:10 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.197 12:33:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.197 12:33:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.197 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:05:37.197 ************************************ 00:05:37.197 START TEST spdkcli_tcp 00:05:37.197 ************************************ 00:05:37.197 12:33:10 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.197 * Looking for test storage... 00:05:37.197 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:37.197 12:33:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:37.459 12:33:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:37.459 12:33:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:37.459 12:33:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:37.459 12:33:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:37.459 12:33:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:37.459 12:33:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:37.459 12:33:10 -- scripts/common.sh@335 -- # IFS=.-: 00:05:37.459 12:33:10 -- scripts/common.sh@335 -- # read -ra ver1 00:05:37.459 12:33:10 -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.459 12:33:10 -- scripts/common.sh@336 -- # read -ra ver2 00:05:37.459 12:33:10 -- scripts/common.sh@337 -- # local 'op=<' 00:05:37.459 12:33:10 -- scripts/common.sh@339 -- # ver1_l=2 00:05:37.459 12:33:10 -- scripts/common.sh@340 -- # ver2_l=1 00:05:37.459 12:33:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:37.459 12:33:10 -- scripts/common.sh@343 -- # case "$op" in 00:05:37.459 12:33:10 -- scripts/common.sh@344 -- # : 1 00:05:37.459 12:33:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:37.459 12:33:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.459 12:33:10 -- scripts/common.sh@364 -- # decimal 1 00:05:37.459 12:33:10 -- scripts/common.sh@352 -- # local d=1 00:05:37.459 12:33:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.459 12:33:10 -- scripts/common.sh@354 -- # echo 1 00:05:37.459 12:33:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:37.459 12:33:10 -- scripts/common.sh@365 -- # decimal 2 00:05:37.459 12:33:10 -- scripts/common.sh@352 -- # local d=2 00:05:37.459 12:33:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.459 12:33:10 -- scripts/common.sh@354 -- # echo 2 00:05:37.459 12:33:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:37.459 12:33:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:37.459 12:33:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:37.459 12:33:10 -- scripts/common.sh@367 -- # return 0 00:05:37.459 12:33:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.459 12:33:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:37.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.459 --rc genhtml_branch_coverage=1 00:05:37.459 --rc genhtml_function_coverage=1 00:05:37.459 --rc genhtml_legend=1 00:05:37.459 --rc geninfo_all_blocks=1 00:05:37.459 --rc geninfo_unexecuted_blocks=1 00:05:37.459 00:05:37.459 ' 00:05:37.459 12:33:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:37.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.459 --rc genhtml_branch_coverage=1 00:05:37.459 --rc genhtml_function_coverage=1 00:05:37.459 --rc genhtml_legend=1 00:05:37.459 --rc geninfo_all_blocks=1 00:05:37.459 --rc geninfo_unexecuted_blocks=1 00:05:37.459 00:05:37.459 ' 00:05:37.459 12:33:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:37.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.459 --rc genhtml_branch_coverage=1 00:05:37.459 --rc genhtml_function_coverage=1 00:05:37.459 --rc genhtml_legend=1 00:05:37.459 --rc geninfo_all_blocks=1 00:05:37.459 --rc geninfo_unexecuted_blocks=1 00:05:37.459 00:05:37.459 ' 00:05:37.459 12:33:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:37.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.459 --rc genhtml_branch_coverage=1 00:05:37.459 --rc genhtml_function_coverage=1 00:05:37.459 --rc genhtml_legend=1 00:05:37.459 --rc geninfo_all_blocks=1 00:05:37.459 --rc geninfo_unexecuted_blocks=1 00:05:37.459 00:05:37.459 ' 00:05:37.459 12:33:10 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:37.459 12:33:10 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:37.459 12:33:10 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:37.459 12:33:10 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:37.459 12:33:10 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:37.459 12:33:10 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:37.459 12:33:10 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:37.459 12:33:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:37.459 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:05:37.459 12:33:10 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=315605 00:05:37.459 12:33:10 -- spdkcli/tcp.sh@27 -- # waitforlisten 315605 00:05:37.459 12:33:10 -- 
spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:37.459 12:33:10 -- common/autotest_common.sh@829 -- # '[' -z 315605 ']' 00:05:37.459 12:33:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.459 12:33:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.459 12:33:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.459 12:33:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.459 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:05:37.459 [2024-11-20 12:33:10.470296] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:37.459 [2024-11-20 12:33:10.470374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid315605 ] 00:05:37.459 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.459 [2024-11-20 12:33:10.551395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.720 [2024-11-20 12:33:10.614108] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:37.720 [2024-11-20 12:33:10.614341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.720 [2024-11-20 12:33:10.614342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.292 12:33:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.292 12:33:11 -- common/autotest_common.sh@862 -- # return 0 00:05:38.292 12:33:11 -- spdkcli/tcp.sh@31 -- # socat_pid=316060 00:05:38.292 12:33:11 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:38.292 12:33:11 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:38.554 [ 00:05:38.554 "bdev_malloc_delete", 00:05:38.554 "bdev_malloc_create", 00:05:38.554 "bdev_null_resize", 00:05:38.554 "bdev_null_delete", 00:05:38.554 "bdev_null_create", 00:05:38.554 "bdev_nvme_cuse_unregister", 00:05:38.554 "bdev_nvme_cuse_register", 00:05:38.554 "bdev_opal_new_user", 00:05:38.554 "bdev_opal_set_lock_state", 00:05:38.554 "bdev_opal_delete", 00:05:38.554 "bdev_opal_get_info", 00:05:38.554 "bdev_opal_create", 00:05:38.554 "bdev_nvme_opal_revert", 00:05:38.554 "bdev_nvme_opal_init", 00:05:38.554 "bdev_nvme_send_cmd", 00:05:38.554 "bdev_nvme_get_path_iostat", 00:05:38.554 "bdev_nvme_get_mdns_discovery_info", 00:05:38.554 "bdev_nvme_stop_mdns_discovery", 00:05:38.554 "bdev_nvme_start_mdns_discovery", 00:05:38.554 "bdev_nvme_set_multipath_policy", 00:05:38.554 "bdev_nvme_set_preferred_path", 00:05:38.554 "bdev_nvme_get_io_paths", 00:05:38.554 "bdev_nvme_remove_error_injection", 00:05:38.554 "bdev_nvme_add_error_injection", 00:05:38.554 "bdev_nvme_get_discovery_info", 00:05:38.554 "bdev_nvme_stop_discovery", 00:05:38.554 "bdev_nvme_start_discovery", 00:05:38.554 "bdev_nvme_get_controller_health_info", 00:05:38.554 "bdev_nvme_disable_controller", 00:05:38.554 "bdev_nvme_enable_controller", 00:05:38.554 "bdev_nvme_reset_controller", 00:05:38.554 "bdev_nvme_get_transport_statistics", 00:05:38.554 "bdev_nvme_apply_firmware", 00:05:38.554 "bdev_nvme_detach_controller", 
00:05:38.554 "bdev_nvme_get_controllers", 00:05:38.554 "bdev_nvme_attach_controller", 00:05:38.554 "bdev_nvme_set_hotplug", 00:05:38.554 "bdev_nvme_set_options", 00:05:38.554 "bdev_passthru_delete", 00:05:38.554 "bdev_passthru_create", 00:05:38.554 "bdev_lvol_grow_lvstore", 00:05:38.554 "bdev_lvol_get_lvols", 00:05:38.554 "bdev_lvol_get_lvstores", 00:05:38.554 "bdev_lvol_delete", 00:05:38.554 "bdev_lvol_set_read_only", 00:05:38.554 "bdev_lvol_resize", 00:05:38.554 "bdev_lvol_decouple_parent", 00:05:38.554 "bdev_lvol_inflate", 00:05:38.554 "bdev_lvol_rename", 00:05:38.554 "bdev_lvol_clone_bdev", 00:05:38.554 "bdev_lvol_clone", 00:05:38.554 "bdev_lvol_snapshot", 00:05:38.554 "bdev_lvol_create", 00:05:38.554 "bdev_lvol_delete_lvstore", 00:05:38.554 "bdev_lvol_rename_lvstore", 00:05:38.554 "bdev_lvol_create_lvstore", 00:05:38.554 "bdev_raid_set_options", 00:05:38.554 "bdev_raid_remove_base_bdev", 00:05:38.554 "bdev_raid_add_base_bdev", 00:05:38.554 "bdev_raid_delete", 00:05:38.554 "bdev_raid_create", 00:05:38.554 "bdev_raid_get_bdevs", 00:05:38.554 "bdev_error_inject_error", 00:05:38.554 "bdev_error_delete", 00:05:38.554 "bdev_error_create", 00:05:38.554 "bdev_split_delete", 00:05:38.554 "bdev_split_create", 00:05:38.554 "bdev_delay_delete", 00:05:38.554 "bdev_delay_create", 00:05:38.554 "bdev_delay_update_latency", 00:05:38.554 "bdev_zone_block_delete", 00:05:38.554 "bdev_zone_block_create", 00:05:38.554 "blobfs_create", 00:05:38.554 "blobfs_detect", 00:05:38.554 "blobfs_set_cache_size", 00:05:38.554 "bdev_aio_delete", 00:05:38.554 "bdev_aio_rescan", 00:05:38.554 "bdev_aio_create", 00:05:38.554 "bdev_ftl_set_property", 00:05:38.554 "bdev_ftl_get_properties", 00:05:38.554 "bdev_ftl_get_stats", 00:05:38.554 "bdev_ftl_unmap", 00:05:38.554 "bdev_ftl_unload", 00:05:38.554 "bdev_ftl_delete", 00:05:38.554 "bdev_ftl_load", 00:05:38.554 "bdev_ftl_create", 00:05:38.554 "bdev_virtio_attach_controller", 00:05:38.554 "bdev_virtio_scsi_get_devices", 00:05:38.554 "bdev_virtio_detach_controller", 00:05:38.554 "bdev_virtio_blk_set_hotplug", 00:05:38.554 "bdev_iscsi_delete", 00:05:38.554 "bdev_iscsi_create", 00:05:38.554 "bdev_iscsi_set_options", 00:05:38.554 "accel_error_inject_error", 00:05:38.554 "ioat_scan_accel_module", 00:05:38.554 "dsa_scan_accel_module", 00:05:38.554 "iaa_scan_accel_module", 00:05:38.554 "iscsi_set_options", 00:05:38.554 "iscsi_get_auth_groups", 00:05:38.554 "iscsi_auth_group_remove_secret", 00:05:38.554 "iscsi_auth_group_add_secret", 00:05:38.554 "iscsi_delete_auth_group", 00:05:38.554 "iscsi_create_auth_group", 00:05:38.554 "iscsi_set_discovery_auth", 00:05:38.554 "iscsi_get_options", 00:05:38.554 "iscsi_target_node_request_logout", 00:05:38.554 "iscsi_target_node_set_redirect", 00:05:38.554 "iscsi_target_node_set_auth", 00:05:38.554 "iscsi_target_node_add_lun", 00:05:38.554 "iscsi_get_connections", 00:05:38.554 "iscsi_portal_group_set_auth", 00:05:38.554 "iscsi_start_portal_group", 00:05:38.554 "iscsi_delete_portal_group", 00:05:38.554 "iscsi_create_portal_group", 00:05:38.554 "iscsi_get_portal_groups", 00:05:38.554 "iscsi_delete_target_node", 00:05:38.554 "iscsi_target_node_remove_pg_ig_maps", 00:05:38.554 "iscsi_target_node_add_pg_ig_maps", 00:05:38.554 "iscsi_create_target_node", 00:05:38.554 "iscsi_get_target_nodes", 00:05:38.554 "iscsi_delete_initiator_group", 00:05:38.554 "iscsi_initiator_group_remove_initiators", 00:05:38.554 "iscsi_initiator_group_add_initiators", 00:05:38.554 "iscsi_create_initiator_group", 00:05:38.554 "iscsi_get_initiator_groups", 00:05:38.554 
"nvmf_set_crdt", 00:05:38.554 "nvmf_set_config", 00:05:38.554 "nvmf_set_max_subsystems", 00:05:38.554 "nvmf_subsystem_get_listeners", 00:05:38.554 "nvmf_subsystem_get_qpairs", 00:05:38.554 "nvmf_subsystem_get_controllers", 00:05:38.554 "nvmf_get_stats", 00:05:38.554 "nvmf_get_transports", 00:05:38.554 "nvmf_create_transport", 00:05:38.554 "nvmf_get_targets", 00:05:38.554 "nvmf_delete_target", 00:05:38.554 "nvmf_create_target", 00:05:38.554 "nvmf_subsystem_allow_any_host", 00:05:38.554 "nvmf_subsystem_remove_host", 00:05:38.554 "nvmf_subsystem_add_host", 00:05:38.554 "nvmf_subsystem_remove_ns", 00:05:38.554 "nvmf_subsystem_add_ns", 00:05:38.554 "nvmf_subsystem_listener_set_ana_state", 00:05:38.554 "nvmf_discovery_get_referrals", 00:05:38.554 "nvmf_discovery_remove_referral", 00:05:38.554 "nvmf_discovery_add_referral", 00:05:38.554 "nvmf_subsystem_remove_listener", 00:05:38.554 "nvmf_subsystem_add_listener", 00:05:38.554 "nvmf_delete_subsystem", 00:05:38.554 "nvmf_create_subsystem", 00:05:38.554 "nvmf_get_subsystems", 00:05:38.554 "env_dpdk_get_mem_stats", 00:05:38.554 "nbd_get_disks", 00:05:38.554 "nbd_stop_disk", 00:05:38.554 "nbd_start_disk", 00:05:38.554 "ublk_recover_disk", 00:05:38.554 "ublk_get_disks", 00:05:38.554 "ublk_stop_disk", 00:05:38.554 "ublk_start_disk", 00:05:38.554 "ublk_destroy_target", 00:05:38.554 "ublk_create_target", 00:05:38.554 "virtio_blk_create_transport", 00:05:38.554 "virtio_blk_get_transports", 00:05:38.554 "vhost_controller_set_coalescing", 00:05:38.554 "vhost_get_controllers", 00:05:38.554 "vhost_delete_controller", 00:05:38.554 "vhost_create_blk_controller", 00:05:38.554 "vhost_scsi_controller_remove_target", 00:05:38.554 "vhost_scsi_controller_add_target", 00:05:38.554 "vhost_start_scsi_controller", 00:05:38.554 "vhost_create_scsi_controller", 00:05:38.554 "thread_set_cpumask", 00:05:38.554 "framework_get_scheduler", 00:05:38.554 "framework_set_scheduler", 00:05:38.554 "framework_get_reactors", 00:05:38.554 "thread_get_io_channels", 00:05:38.554 "thread_get_pollers", 00:05:38.554 "thread_get_stats", 00:05:38.554 "framework_monitor_context_switch", 00:05:38.554 "spdk_kill_instance", 00:05:38.554 "log_enable_timestamps", 00:05:38.554 "log_get_flags", 00:05:38.554 "log_clear_flag", 00:05:38.554 "log_set_flag", 00:05:38.554 "log_get_level", 00:05:38.554 "log_set_level", 00:05:38.554 "log_get_print_level", 00:05:38.554 "log_set_print_level", 00:05:38.554 "framework_enable_cpumask_locks", 00:05:38.554 "framework_disable_cpumask_locks", 00:05:38.554 "framework_wait_init", 00:05:38.554 "framework_start_init", 00:05:38.554 "scsi_get_devices", 00:05:38.554 "bdev_get_histogram", 00:05:38.554 "bdev_enable_histogram", 00:05:38.554 "bdev_set_qos_limit", 00:05:38.554 "bdev_set_qd_sampling_period", 00:05:38.554 "bdev_get_bdevs", 00:05:38.554 "bdev_reset_iostat", 00:05:38.554 "bdev_get_iostat", 00:05:38.554 "bdev_examine", 00:05:38.554 "bdev_wait_for_examine", 00:05:38.554 "bdev_set_options", 00:05:38.555 "notify_get_notifications", 00:05:38.555 "notify_get_types", 00:05:38.555 "accel_get_stats", 00:05:38.555 "accel_set_options", 00:05:38.555 "accel_set_driver", 00:05:38.555 "accel_crypto_key_destroy", 00:05:38.555 "accel_crypto_keys_get", 00:05:38.555 "accel_crypto_key_create", 00:05:38.555 "accel_assign_opc", 00:05:38.555 "accel_get_module_info", 00:05:38.555 "accel_get_opc_assignments", 00:05:38.555 "vmd_rescan", 00:05:38.555 "vmd_remove_device", 00:05:38.555 "vmd_enable", 00:05:38.555 "sock_set_default_impl", 00:05:38.555 "sock_impl_set_options", 00:05:38.555 
"sock_impl_get_options", 00:05:38.555 "iobuf_get_stats", 00:05:38.555 "iobuf_set_options", 00:05:38.555 "framework_get_pci_devices", 00:05:38.555 "framework_get_config", 00:05:38.555 "framework_get_subsystems", 00:05:38.555 "trace_get_info", 00:05:38.555 "trace_get_tpoint_group_mask", 00:05:38.555 "trace_disable_tpoint_group", 00:05:38.555 "trace_enable_tpoint_group", 00:05:38.555 "trace_clear_tpoint_mask", 00:05:38.555 "trace_set_tpoint_mask", 00:05:38.555 "spdk_get_version", 00:05:38.555 "rpc_get_methods" 00:05:38.555 ] 00:05:38.555 12:33:11 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:38.555 12:33:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.555 12:33:11 -- common/autotest_common.sh@10 -- # set +x 00:05:38.555 12:33:11 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:38.555 12:33:11 -- spdkcli/tcp.sh@38 -- # killprocess 315605 00:05:38.555 12:33:11 -- common/autotest_common.sh@936 -- # '[' -z 315605 ']' 00:05:38.555 12:33:11 -- common/autotest_common.sh@940 -- # kill -0 315605 00:05:38.555 12:33:11 -- common/autotest_common.sh@941 -- # uname 00:05:38.555 12:33:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:38.555 12:33:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 315605 00:05:38.555 12:33:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:38.555 12:33:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:38.555 12:33:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 315605' 00:05:38.555 killing process with pid 315605 00:05:38.555 12:33:11 -- common/autotest_common.sh@955 -- # kill 315605 00:05:38.555 12:33:11 -- common/autotest_common.sh@960 -- # wait 315605 00:05:38.817 00:05:38.817 real 0m1.498s 00:05:38.817 user 0m2.686s 00:05:38.817 sys 0m0.446s 00:05:38.817 12:33:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.817 12:33:11 -- common/autotest_common.sh@10 -- # set +x 00:05:38.817 ************************************ 00:05:38.817 END TEST spdkcli_tcp 00:05:38.817 ************************************ 00:05:38.817 12:33:11 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.817 12:33:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.817 12:33:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.817 12:33:11 -- common/autotest_common.sh@10 -- # set +x 00:05:38.817 ************************************ 00:05:38.817 START TEST dpdk_mem_utility 00:05:38.817 ************************************ 00:05:38.818 12:33:11 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.818 * Looking for test storage... 
00:05:38.818 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:38.818 12:33:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:38.818 12:33:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:38.818 12:33:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:39.079 12:33:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:39.079 12:33:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:39.079 12:33:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:39.079 12:33:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:39.079 12:33:11 -- scripts/common.sh@335 -- # IFS=.-: 00:05:39.079 12:33:11 -- scripts/common.sh@335 -- # read -ra ver1 00:05:39.079 12:33:11 -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.079 12:33:11 -- scripts/common.sh@336 -- # read -ra ver2 00:05:39.079 12:33:11 -- scripts/common.sh@337 -- # local 'op=<' 00:05:39.079 12:33:11 -- scripts/common.sh@339 -- # ver1_l=2 00:05:39.079 12:33:11 -- scripts/common.sh@340 -- # ver2_l=1 00:05:39.079 12:33:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:39.079 12:33:11 -- scripts/common.sh@343 -- # case "$op" in 00:05:39.079 12:33:11 -- scripts/common.sh@344 -- # : 1 00:05:39.079 12:33:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:39.080 12:33:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.080 12:33:11 -- scripts/common.sh@364 -- # decimal 1 00:05:39.080 12:33:11 -- scripts/common.sh@352 -- # local d=1 00:05:39.080 12:33:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.080 12:33:11 -- scripts/common.sh@354 -- # echo 1 00:05:39.080 12:33:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:39.080 12:33:11 -- scripts/common.sh@365 -- # decimal 2 00:05:39.080 12:33:11 -- scripts/common.sh@352 -- # local d=2 00:05:39.080 12:33:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.080 12:33:11 -- scripts/common.sh@354 -- # echo 2 00:05:39.080 12:33:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:39.080 12:33:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:39.080 12:33:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:39.080 12:33:11 -- scripts/common.sh@367 -- # return 0 00:05:39.080 12:33:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.080 12:33:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:39.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.080 --rc genhtml_branch_coverage=1 00:05:39.080 --rc genhtml_function_coverage=1 00:05:39.080 --rc genhtml_legend=1 00:05:39.080 --rc geninfo_all_blocks=1 00:05:39.080 --rc geninfo_unexecuted_blocks=1 00:05:39.080 00:05:39.080 ' 00:05:39.080 12:33:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:39.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.080 --rc genhtml_branch_coverage=1 00:05:39.080 --rc genhtml_function_coverage=1 00:05:39.080 --rc genhtml_legend=1 00:05:39.080 --rc geninfo_all_blocks=1 00:05:39.080 --rc geninfo_unexecuted_blocks=1 00:05:39.080 00:05:39.080 ' 00:05:39.080 12:33:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:39.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.080 --rc genhtml_branch_coverage=1 00:05:39.080 --rc genhtml_function_coverage=1 00:05:39.080 --rc genhtml_legend=1 00:05:39.080 --rc geninfo_all_blocks=1 00:05:39.080 --rc geninfo_unexecuted_blocks=1 00:05:39.080 
00:05:39.080 ' 00:05:39.080 12:33:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:39.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.080 --rc genhtml_branch_coverage=1 00:05:39.080 --rc genhtml_function_coverage=1 00:05:39.080 --rc genhtml_legend=1 00:05:39.080 --rc geninfo_all_blocks=1 00:05:39.080 --rc geninfo_unexecuted_blocks=1 00:05:39.080 00:05:39.080 ' 00:05:39.080 12:33:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:39.080 12:33:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=316191 00:05:39.080 12:33:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 316191 00:05:39.080 12:33:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.080 12:33:11 -- common/autotest_common.sh@829 -- # '[' -z 316191 ']' 00:05:39.080 12:33:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.080 12:33:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.080 12:33:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.080 12:33:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.080 12:33:11 -- common/autotest_common.sh@10 -- # set +x 00:05:39.080 [2024-11-20 12:33:11.996930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:39.080 [2024-11-20 12:33:11.997011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316191 ] 00:05:39.080 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.080 [2024-11-20 12:33:12.081614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.080 [2024-11-20 12:33:12.148999] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.080 [2024-11-20 12:33:12.149122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.023 12:33:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.023 12:33:12 -- common/autotest_common.sh@862 -- # return 0 00:05:40.023 12:33:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:40.023 12:33:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:40.023 12:33:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.023 12:33:12 -- common/autotest_common.sh@10 -- # set +x 00:05:40.023 { 00:05:40.023 "filename": "/tmp/spdk_mem_dump.txt" 00:05:40.023 } 00:05:40.023 12:33:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.023 12:33:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:40.023 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:40.023 1 heaps totaling size 814.000000 MiB 00:05:40.023 size: 814.000000 MiB heap id: 0 00:05:40.023 end heaps---------- 00:05:40.023 8 mempools totaling size 598.116089 MiB 00:05:40.023 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:40.023 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:40.023 size: 84.521057 MiB name: 
bdev_io_316191 00:05:40.023 size: 51.011292 MiB name: evtpool_316191 00:05:40.023 size: 50.003479 MiB name: msgpool_316191 00:05:40.023 size: 21.763794 MiB name: PDU_Pool 00:05:40.023 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:40.023 size: 0.026123 MiB name: Session_Pool 00:05:40.023 end mempools------- 00:05:40.023 6 memzones totaling size 4.142822 MiB 00:05:40.023 size: 1.000366 MiB name: RG_ring_0_316191 00:05:40.023 size: 1.000366 MiB name: RG_ring_1_316191 00:05:40.023 size: 1.000366 MiB name: RG_ring_4_316191 00:05:40.023 size: 1.000366 MiB name: RG_ring_5_316191 00:05:40.023 size: 0.125366 MiB name: RG_ring_2_316191 00:05:40.023 size: 0.015991 MiB name: RG_ring_3_316191 00:05:40.023 end memzones------- 00:05:40.023 12:33:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:40.023 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:40.023 list of free elements. size: 12.519348 MiB 00:05:40.023 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:40.023 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:40.023 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:40.023 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:40.023 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:40.023 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:40.023 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:40.023 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:40.023 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:40.023 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:40.023 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:40.023 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:40.023 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:40.024 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:40.024 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:40.024 list of standard malloc elements. 
size: 199.218079 MiB 00:05:40.024 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:40.024 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:40.024 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:40.024 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:40.024 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:40.024 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:40.024 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:40.024 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:40.024 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:40.024 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:40.024 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:40.024 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:40.024 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:40.024 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:40.024 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:40.024 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:40.024 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:40.024 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:40.024 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:40.024 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:40.024 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:40.024 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:40.024 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:40.024 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:40.024 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:40.024 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:40.024 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:40.024 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:40.024 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:40.024 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:40.024 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:40.024 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:40.024 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:40.024 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:40.024 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:40.024 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:40.024 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:40.024 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:40.024 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:40.024 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:40.024 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:40.024 list of memzone associated elements. 
size: 602.262573 MiB 00:05:40.024 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:40.024 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:40.024 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:40.024 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:40.024 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:40.024 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_316191_0 00:05:40.024 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:40.024 associated memzone info: size: 48.002930 MiB name: MP_evtpool_316191_0 00:05:40.024 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:40.024 associated memzone info: size: 48.002930 MiB name: MP_msgpool_316191_0 00:05:40.024 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:40.024 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:40.024 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:40.024 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:40.024 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:40.024 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_316191 00:05:40.024 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:40.024 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_316191 00:05:40.024 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:40.024 associated memzone info: size: 1.007996 MiB name: MP_evtpool_316191 00:05:40.024 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:40.024 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:40.024 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:40.024 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:40.024 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:40.024 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:40.024 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:40.024 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:40.024 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:40.024 associated memzone info: size: 1.000366 MiB name: RG_ring_0_316191 00:05:40.024 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:40.024 associated memzone info: size: 1.000366 MiB name: RG_ring_1_316191 00:05:40.024 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:40.024 associated memzone info: size: 1.000366 MiB name: RG_ring_4_316191 00:05:40.024 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:40.024 associated memzone info: size: 1.000366 MiB name: RG_ring_5_316191 00:05:40.024 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:40.024 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_316191 00:05:40.024 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:40.024 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:40.024 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:40.024 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:40.024 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:40.024 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:40.024 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:40.024 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_316191 00:05:40.024 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:40.024 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:40.024 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:40.024 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:40.024 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:40.024 associated memzone info: size: 0.015991 MiB name: RG_ring_3_316191 00:05:40.024 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:40.024 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:40.024 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:40.024 associated memzone info: size: 0.000183 MiB name: MP_msgpool_316191 00:05:40.024 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:40.024 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_316191 00:05:40.024 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:40.024 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:40.024 12:33:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:40.024 12:33:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 316191 00:05:40.024 12:33:12 -- common/autotest_common.sh@936 -- # '[' -z 316191 ']' 00:05:40.024 12:33:12 -- common/autotest_common.sh@940 -- # kill -0 316191 00:05:40.024 12:33:12 -- common/autotest_common.sh@941 -- # uname 00:05:40.024 12:33:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:40.024 12:33:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 316191 00:05:40.024 12:33:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:40.024 12:33:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:40.024 12:33:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 316191' 00:05:40.024 killing process with pid 316191 00:05:40.024 12:33:12 -- common/autotest_common.sh@955 -- # kill 316191 00:05:40.024 12:33:12 -- common/autotest_common.sh@960 -- # wait 316191 00:05:40.286 00:05:40.286 real 0m1.382s 00:05:40.286 user 0m1.434s 00:05:40.286 sys 0m0.416s 00:05:40.286 12:33:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.286 12:33:13 -- common/autotest_common.sh@10 -- # set +x 00:05:40.286 ************************************ 00:05:40.286 END TEST dpdk_mem_utility 00:05:40.286 ************************************ 00:05:40.286 12:33:13 -- spdk/autotest.sh@174 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:40.286 12:33:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.286 12:33:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.286 12:33:13 -- common/autotest_common.sh@10 -- # set +x 00:05:40.286 ************************************ 00:05:40.286 START TEST event 00:05:40.286 ************************************ 00:05:40.286 12:33:13 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:40.286 * Looking for test storage... 
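The dpdk_mem_utility pass summarized above first asks the target to dump its DPDK memory state and then post-processes the dump with the helper script. A short sketch of the same flow, assuming the default dump file /tmp/spdk_mem_dump.txt reported in the RPC reply:

  # have the running target write its DPDK memory stats to /tmp/spdk_mem_dump.txt
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # print the heap/mempool/memzone summary seen above
  ./scripts/dpdk_mem_info.py
  # print per-element detail for heap 0, as in the second listing above
  ./scripts/dpdk_mem_info.py -m 0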
00:05:40.286 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:40.286 12:33:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:40.286 12:33:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:40.286 12:33:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:40.286 12:33:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:40.286 12:33:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:40.286 12:33:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:40.286 12:33:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:40.286 12:33:13 -- scripts/common.sh@335 -- # IFS=.-: 00:05:40.286 12:33:13 -- scripts/common.sh@335 -- # read -ra ver1 00:05:40.286 12:33:13 -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.286 12:33:13 -- scripts/common.sh@336 -- # read -ra ver2 00:05:40.286 12:33:13 -- scripts/common.sh@337 -- # local 'op=<' 00:05:40.286 12:33:13 -- scripts/common.sh@339 -- # ver1_l=2 00:05:40.286 12:33:13 -- scripts/common.sh@340 -- # ver2_l=1 00:05:40.286 12:33:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:40.286 12:33:13 -- scripts/common.sh@343 -- # case "$op" in 00:05:40.286 12:33:13 -- scripts/common.sh@344 -- # : 1 00:05:40.286 12:33:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:40.286 12:33:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.286 12:33:13 -- scripts/common.sh@364 -- # decimal 1 00:05:40.286 12:33:13 -- scripts/common.sh@352 -- # local d=1 00:05:40.286 12:33:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.286 12:33:13 -- scripts/common.sh@354 -- # echo 1 00:05:40.286 12:33:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:40.286 12:33:13 -- scripts/common.sh@365 -- # decimal 2 00:05:40.286 12:33:13 -- scripts/common.sh@352 -- # local d=2 00:05:40.286 12:33:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.286 12:33:13 -- scripts/common.sh@354 -- # echo 2 00:05:40.286 12:33:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:40.286 12:33:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:40.286 12:33:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:40.286 12:33:13 -- scripts/common.sh@367 -- # return 0 00:05:40.286 12:33:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.286 12:33:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:40.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.286 --rc genhtml_branch_coverage=1 00:05:40.286 --rc genhtml_function_coverage=1 00:05:40.286 --rc genhtml_legend=1 00:05:40.286 --rc geninfo_all_blocks=1 00:05:40.286 --rc geninfo_unexecuted_blocks=1 00:05:40.286 00:05:40.286 ' 00:05:40.286 12:33:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:40.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.286 --rc genhtml_branch_coverage=1 00:05:40.286 --rc genhtml_function_coverage=1 00:05:40.286 --rc genhtml_legend=1 00:05:40.286 --rc geninfo_all_blocks=1 00:05:40.286 --rc geninfo_unexecuted_blocks=1 00:05:40.286 00:05:40.286 ' 00:05:40.286 12:33:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:40.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.286 --rc genhtml_branch_coverage=1 00:05:40.286 --rc genhtml_function_coverage=1 00:05:40.286 --rc genhtml_legend=1 00:05:40.286 --rc geninfo_all_blocks=1 00:05:40.286 --rc geninfo_unexecuted_blocks=1 00:05:40.286 00:05:40.286 ' 
00:05:40.286 12:33:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:40.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.286 --rc genhtml_branch_coverage=1 00:05:40.286 --rc genhtml_function_coverage=1 00:05:40.286 --rc genhtml_legend=1 00:05:40.286 --rc geninfo_all_blocks=1 00:05:40.286 --rc geninfo_unexecuted_blocks=1 00:05:40.286 00:05:40.286 ' 00:05:40.286 12:33:13 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:40.286 12:33:13 -- bdev/nbd_common.sh@6 -- # set -e 00:05:40.286 12:33:13 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:40.286 12:33:13 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:40.286 12:33:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.286 12:33:13 -- common/autotest_common.sh@10 -- # set +x 00:05:40.286 ************************************ 00:05:40.286 START TEST event_perf 00:05:40.286 ************************************ 00:05:40.286 12:33:13 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:40.547 Running I/O for 1 seconds...[2024-11-20 12:33:13.399451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:40.547 [2024-11-20 12:33:13.399536] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316591 ] 00:05:40.547 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.547 [2024-11-20 12:33:13.480868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:40.547 [2024-11-20 12:33:13.536056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.547 [2024-11-20 12:33:13.536323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.547 [2024-11-20 12:33:13.536431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.547 [2024-11-20 12:33:13.536432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.490 Running I/O for 1 seconds... 00:05:41.490 lcore 0: 174405 00:05:41.490 lcore 1: 174408 00:05:41.490 lcore 2: 174409 00:05:41.490 lcore 3: 174410 00:05:41.490 done. 00:05:41.490 00:05:41.490 real 0m1.204s 00:05:41.490 user 0m4.117s 00:05:41.490 sys 0m0.082s 00:05:41.490 12:33:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.490 12:33:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.490 ************************************ 00:05:41.490 END TEST event_perf 00:05:41.490 ************************************ 00:05:41.751 12:33:14 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:41.751 12:33:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:41.751 12:33:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.751 12:33:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.751 ************************************ 00:05:41.751 START TEST event_reactor 00:05:41.751 ************************************ 00:05:41.751 12:33:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:41.751 [2024-11-20 12:33:14.648218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:41.751 [2024-11-20 12:33:14.648304] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316946 ] 00:05:41.751 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.751 [2024-11-20 12:33:14.729190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.751 [2024-11-20 12:33:14.790732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.137 test_start 00:05:43.137 oneshot 00:05:43.137 tick 100 00:05:43.137 tick 100 00:05:43.137 tick 250 00:05:43.137 tick 100 00:05:43.137 tick 100 00:05:43.137 tick 100 00:05:43.137 tick 250 00:05:43.137 tick 500 00:05:43.137 tick 100 00:05:43.138 tick 100 00:05:43.138 tick 250 00:05:43.138 tick 100 00:05:43.138 tick 100 00:05:43.138 test_end 00:05:43.138 00:05:43.138 real 0m1.205s 00:05:43.138 user 0m1.112s 00:05:43.138 sys 0m0.089s 00:05:43.138 12:33:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.138 12:33:15 -- common/autotest_common.sh@10 -- # set +x 00:05:43.138 ************************************ 00:05:43.138 END TEST event_reactor 00:05:43.138 ************************************ 00:05:43.138 12:33:15 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:43.138 12:33:15 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:43.138 12:33:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.138 12:33:15 -- common/autotest_common.sh@10 -- # set +x 00:05:43.138 ************************************ 00:05:43.138 START TEST event_reactor_perf 00:05:43.138 ************************************ 00:05:43.138 12:33:15 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:43.138 [2024-11-20 12:33:15.898411] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:43.138 [2024-11-20 12:33:15.898515] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317201 ] 00:05:43.138 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.138 [2024-11-20 12:33:15.978790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.138 [2024-11-20 12:33:16.036696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.081 test_start 00:05:44.081 test_end 00:05:44.081 Performance: 535588 events per second 00:05:44.081 00:05:44.081 real 0m1.201s 00:05:44.081 user 0m1.118s 00:05:44.081 sys 0m0.079s 00:05:44.081 12:33:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.081 12:33:17 -- common/autotest_common.sh@10 -- # set +x 00:05:44.081 ************************************ 00:05:44.081 END TEST event_reactor_perf 00:05:44.081 ************************************ 00:05:44.081 12:33:17 -- event/event.sh@49 -- # uname -s 00:05:44.081 12:33:17 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:44.081 12:33:17 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:44.081 12:33:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.081 12:33:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.081 12:33:17 -- common/autotest_common.sh@10 -- # set +x 00:05:44.081 ************************************ 00:05:44.081 START TEST event_scheduler 00:05:44.081 ************************************ 00:05:44.081 12:33:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:44.343 * Looking for test storage... 00:05:44.343 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:44.343 12:33:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:44.343 12:33:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:44.343 12:33:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:44.343 12:33:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:44.343 12:33:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:44.343 12:33:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:44.343 12:33:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:44.343 12:33:17 -- scripts/common.sh@335 -- # IFS=.-: 00:05:44.343 12:33:17 -- scripts/common.sh@335 -- # read -ra ver1 00:05:44.343 12:33:17 -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.343 12:33:17 -- scripts/common.sh@336 -- # read -ra ver2 00:05:44.343 12:33:17 -- scripts/common.sh@337 -- # local 'op=<' 00:05:44.343 12:33:17 -- scripts/common.sh@339 -- # ver1_l=2 00:05:44.343 12:33:17 -- scripts/common.sh@340 -- # ver2_l=1 00:05:44.343 12:33:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:44.343 12:33:17 -- scripts/common.sh@343 -- # case "$op" in 00:05:44.343 12:33:17 -- scripts/common.sh@344 -- # : 1 00:05:44.343 12:33:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:44.343 12:33:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.343 12:33:17 -- scripts/common.sh@364 -- # decimal 1 00:05:44.343 12:33:17 -- scripts/common.sh@352 -- # local d=1 00:05:44.343 12:33:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.343 12:33:17 -- scripts/common.sh@354 -- # echo 1 00:05:44.343 12:33:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:44.343 12:33:17 -- scripts/common.sh@365 -- # decimal 2 00:05:44.343 12:33:17 -- scripts/common.sh@352 -- # local d=2 00:05:44.343 12:33:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.343 12:33:17 -- scripts/common.sh@354 -- # echo 2 00:05:44.343 12:33:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:44.343 12:33:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:44.343 12:33:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:44.343 12:33:17 -- scripts/common.sh@367 -- # return 0 00:05:44.343 12:33:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.343 12:33:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:44.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.343 --rc genhtml_branch_coverage=1 00:05:44.344 --rc genhtml_function_coverage=1 00:05:44.344 --rc genhtml_legend=1 00:05:44.344 --rc geninfo_all_blocks=1 00:05:44.344 --rc geninfo_unexecuted_blocks=1 00:05:44.344 00:05:44.344 ' 00:05:44.344 12:33:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:44.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.344 --rc genhtml_branch_coverage=1 00:05:44.344 --rc genhtml_function_coverage=1 00:05:44.344 --rc genhtml_legend=1 00:05:44.344 --rc geninfo_all_blocks=1 00:05:44.344 --rc geninfo_unexecuted_blocks=1 00:05:44.344 00:05:44.344 ' 00:05:44.344 12:33:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:44.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.344 --rc genhtml_branch_coverage=1 00:05:44.344 --rc genhtml_function_coverage=1 00:05:44.344 --rc genhtml_legend=1 00:05:44.344 --rc geninfo_all_blocks=1 00:05:44.344 --rc geninfo_unexecuted_blocks=1 00:05:44.344 00:05:44.344 ' 00:05:44.344 12:33:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:44.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.344 --rc genhtml_branch_coverage=1 00:05:44.344 --rc genhtml_function_coverage=1 00:05:44.344 --rc genhtml_legend=1 00:05:44.344 --rc geninfo_all_blocks=1 00:05:44.344 --rc geninfo_unexecuted_blocks=1 00:05:44.344 00:05:44.344 ' 00:05:44.344 12:33:17 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:44.344 12:33:17 -- scheduler/scheduler.sh@35 -- # scheduler_pid=317473 00:05:44.344 12:33:17 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.344 12:33:17 -- scheduler/scheduler.sh@37 -- # waitforlisten 317473 00:05:44.344 12:33:17 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:44.344 12:33:17 -- common/autotest_common.sh@829 -- # '[' -z 317473 ']' 00:05:44.344 12:33:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.344 12:33:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.344 12:33:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:44.344 12:33:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.344 12:33:17 -- common/autotest_common.sh@10 -- # set +x 00:05:44.344 [2024-11-20 12:33:17.369191] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:44.344 [2024-11-20 12:33:17.369266] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317473 ] 00:05:44.344 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.606 [2024-11-20 12:33:17.450010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.606 [2024-11-20 12:33:17.542496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.606 [2024-11-20 12:33:17.542660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.606 [2024-11-20 12:33:17.542822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.606 [2024-11-20 12:33:17.542822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.177 12:33:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.177 12:33:18 -- common/autotest_common.sh@862 -- # return 0 00:05:45.177 12:33:18 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:45.177 12:33:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.177 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.177 POWER: Env isn't set yet! 00:05:45.177 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:45.177 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:45.177 POWER: Cannot set governor of lcore 0 to userspace 00:05:45.177 POWER: Attempting to initialise PSTAT power management... 00:05:45.177 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:45.177 POWER: Initialized successfully for lcore 0 power management 00:05:45.177 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:45.177 POWER: Initialized successfully for lcore 1 power management 00:05:45.177 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:45.177 POWER: Initialized successfully for lcore 2 power management 00:05:45.177 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:45.177 POWER: Initialized successfully for lcore 3 power management 00:05:45.177 [2024-11-20 12:33:18.207206] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:45.177 [2024-11-20 12:33:18.207218] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:45.177 [2024-11-20 12:33:18.207224] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:45.177 12:33:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.177 12:33:18 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:45.177 12:33:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.177 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.177 [2024-11-20 12:33:18.265028] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
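The governor and set_opts notices above are emitted when the test enables the dynamic scheduler on a target started with --wait-for-rpc. A hedged sketch of the equivalent calls made over the plain RPC socket; all three methods appear in the rpc_get_methods listing earlier in this log:

  # switch the framework to the dynamic scheduler (triggers the load/core/busy notices above)
  ./scripts/rpc.py framework_set_scheduler dynamic
  # finish subsystem initialization once the scheduler is configured
  ./scripts/rpc.py framework_start_init
  # report which scheduler is active
  ./scripts/rpc.py framework_get_scheduler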
00:05:45.177 12:33:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.177 12:33:18 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:45.177 12:33:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.177 12:33:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.177 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.177 ************************************ 00:05:45.177 START TEST scheduler_create_thread 00:05:45.177 ************************************ 00:05:45.177 12:33:18 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:45.177 12:33:18 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:45.177 12:33:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.178 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.439 2 00:05:45.439 12:33:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.439 12:33:18 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:45.439 12:33:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.439 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.439 3 00:05:45.439 12:33:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.439 12:33:18 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:45.439 12:33:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.439 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.439 4 00:05:45.439 12:33:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.439 12:33:18 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:45.439 12:33:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.439 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.439 5 00:05:45.439 12:33:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.439 12:33:18 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:45.439 12:33:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.439 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.439 6 00:05:45.439 12:33:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.439 12:33:18 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:45.439 12:33:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.439 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.439 7 00:05:45.439 12:33:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.439 12:33:18 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:45.439 12:33:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.439 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.439 8 00:05:45.439 12:33:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.439 12:33:18 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:45.439 12:33:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.439 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.439 9 00:05:45.439 
12:33:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.439 12:33:18 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:45.439 12:33:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.439 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:46.828 10 00:05:46.828 12:33:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.828 12:33:19 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:46.828 12:33:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.828 12:33:19 -- common/autotest_common.sh@10 -- # set +x 00:05:48.216 12:33:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.216 12:33:20 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:48.216 12:33:20 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:48.216 12:33:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.216 12:33:20 -- common/autotest_common.sh@10 -- # set +x 00:05:48.790 12:33:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.790 12:33:21 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:48.790 12:33:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.790 12:33:21 -- common/autotest_common.sh@10 -- # set +x 00:05:49.362 12:33:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.362 12:33:22 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:49.362 12:33:22 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:49.362 12:33:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.362 12:33:22 -- common/autotest_common.sh@10 -- # set +x 00:05:50.307 12:33:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.307 00:05:50.307 real 0m4.797s 00:05:50.307 user 0m0.023s 00:05:50.307 sys 0m0.008s 00:05:50.307 12:33:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.307 12:33:23 -- common/autotest_common.sh@10 -- # set +x 00:05:50.307 ************************************ 00:05:50.307 END TEST scheduler_create_thread 00:05:50.307 ************************************ 00:05:50.307 12:33:23 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:50.307 12:33:23 -- scheduler/scheduler.sh@46 -- # killprocess 317473 00:05:50.307 12:33:23 -- common/autotest_common.sh@936 -- # '[' -z 317473 ']' 00:05:50.307 12:33:23 -- common/autotest_common.sh@940 -- # kill -0 317473 00:05:50.307 12:33:23 -- common/autotest_common.sh@941 -- # uname 00:05:50.307 12:33:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:50.307 12:33:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 317473 00:05:50.307 12:33:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:50.307 12:33:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:50.307 12:33:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 317473' 00:05:50.307 killing process with pid 317473 00:05:50.307 12:33:23 -- common/autotest_common.sh@955 -- # kill 317473 00:05:50.307 12:33:23 -- common/autotest_common.sh@960 -- # wait 317473 00:05:50.307 [2024-11-20 12:33:23.350923] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
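Note on the scheduler_create_thread pass above: the whole test is a sequence of rpc_cmd calls against the scheduler plugin. A condensed sketch of that sequence follows; rpc_cmd is the autotest shell wrapper used throughout this run, and the thread ids 11 and 12 are simply the values this particular run got back, not fixed constants.

    # four busy threads pinned to cores 0-3, then four idle threads on the same masks
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m $mask -a 100
    done
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m $mask -a 0
    done
    # unpinned threads with partial activity, one of which has its activity changed at runtime
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)    # 11 in this run
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    # finally a thread that is created only to be deleted again
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)      # 12 in this run
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"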
00:05:50.569 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:50.569 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:50.569 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:50.569 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:50.569 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:50.569 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:50.569 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:50.569 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:50.569 00:05:50.569 real 0m6.384s 00:05:50.569 user 0m14.095s 00:05:50.569 sys 0m0.379s 00:05:50.569 12:33:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.569 12:33:23 -- common/autotest_common.sh@10 -- # set +x 00:05:50.569 ************************************ 00:05:50.569 END TEST event_scheduler 00:05:50.569 ************************************ 00:05:50.569 12:33:23 -- event/event.sh@51 -- # modprobe -n nbd 00:05:50.569 12:33:23 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:50.569 12:33:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.569 12:33:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.569 12:33:23 -- common/autotest_common.sh@10 -- # set +x 00:05:50.569 ************************************ 00:05:50.569 START TEST app_repeat 00:05:50.569 ************************************ 00:05:50.569 12:33:23 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:50.569 12:33:23 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.569 12:33:23 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.569 12:33:23 -- event/event.sh@13 -- # local nbd_list 00:05:50.569 12:33:23 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.569 12:33:23 -- event/event.sh@14 -- # local bdev_list 00:05:50.569 12:33:23 -- event/event.sh@15 -- # local repeat_times=4 00:05:50.569 12:33:23 -- event/event.sh@17 -- # modprobe nbd 00:05:50.569 12:33:23 -- event/event.sh@19 -- # repeat_pid=318771 00:05:50.569 12:33:23 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.569 12:33:23 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:50.569 12:33:23 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 318771' 00:05:50.569 Process app_repeat pid: 318771 00:05:50.569 12:33:23 -- event/event.sh@23 -- # for i in {0..2} 00:05:50.569 12:33:23 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:50.569 spdk_app_start Round 0 00:05:50.569 12:33:23 -- event/event.sh@25 -- # waitforlisten 318771 /var/tmp/spdk-nbd.sock 00:05:50.569 12:33:23 -- common/autotest_common.sh@829 -- # '[' -z 318771 ']' 00:05:50.569 12:33:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.569 12:33:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.569 12:33:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:50.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.569 12:33:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.569 12:33:23 -- common/autotest_common.sh@10 -- # set +x 00:05:50.569 [2024-11-20 12:33:23.600357] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:50.569 [2024-11-20 12:33:23.600434] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid318771 ] 00:05:50.569 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.569 [2024-11-20 12:33:23.666956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.831 [2024-11-20 12:33:23.735341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.831 [2024-11-20 12:33:23.735342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.405 12:33:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.405 12:33:24 -- common/autotest_common.sh@862 -- # return 0 00:05:51.405 12:33:24 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.666 Malloc0 00:05:51.666 12:33:24 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.666 Malloc1 00:05:51.666 12:33:24 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.666 12:33:24 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.666 12:33:24 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.666 12:33:24 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.666 12:33:24 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.666 12:33:24 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.666 12:33:24 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.666 12:33:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.666 12:33:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.666 12:33:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.666 12:33:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.666 12:33:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.666 12:33:24 -- bdev/nbd_common.sh@12 -- # local i 00:05:51.666 12:33:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.666 12:33:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.666 12:33:24 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.927 /dev/nbd0 00:05:51.927 12:33:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.928 12:33:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.928 12:33:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:51.928 12:33:24 -- common/autotest_common.sh@867 -- # local i 00:05:51.928 12:33:24 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:51.928 12:33:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:51.928 12:33:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:51.928 12:33:24 -- common/autotest_common.sh@871 -- # 
break 00:05:51.928 12:33:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:51.928 12:33:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:51.928 12:33:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.928 1+0 records in 00:05:51.928 1+0 records out 00:05:51.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356279 s, 11.5 MB/s 00:05:51.928 12:33:24 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:51.928 12:33:24 -- common/autotest_common.sh@884 -- # size=4096 00:05:51.928 12:33:24 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:51.928 12:33:24 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:51.928 12:33:24 -- common/autotest_common.sh@887 -- # return 0 00:05:51.928 12:33:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.928 12:33:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.928 12:33:24 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.188 /dev/nbd1 00:05:52.188 12:33:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.188 12:33:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.188 12:33:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:52.188 12:33:25 -- common/autotest_common.sh@867 -- # local i 00:05:52.188 12:33:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:52.188 12:33:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.188 12:33:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:52.188 12:33:25 -- common/autotest_common.sh@871 -- # break 00:05:52.188 12:33:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.188 12:33:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.188 12:33:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.188 1+0 records in 00:05:52.188 1+0 records out 00:05:52.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158709 s, 25.8 MB/s 00:05:52.188 12:33:25 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:52.188 12:33:25 -- common/autotest_common.sh@884 -- # size=4096 00:05:52.188 12:33:25 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:52.188 12:33:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.188 12:33:25 -- common/autotest_common.sh@887 -- # return 0 00:05:52.188 12:33:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.188 12:33:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.189 12:33:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.189 12:33:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.189 12:33:25 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.189 12:33:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.189 { 00:05:52.189 "nbd_device": "/dev/nbd0", 00:05:52.189 "bdev_name": "Malloc0" 00:05:52.189 }, 00:05:52.189 { 00:05:52.189 "nbd_device": "/dev/nbd1", 00:05:52.189 "bdev_name": "Malloc1" 00:05:52.189 } 00:05:52.189 ]' 
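For readability, the attach sequence traced above for each NBD device reduces to the sketch below. The RPC socket, the 64x4096 Malloc bdev, the /proc/partitions check and the one-block direct read are exactly what the trace shows; only the rpc shell variable is added here as shorthand for the full scripts/rpc.py path.

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096         # -> Malloc0 (repeated for Malloc1)
    $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    # waitfornbd: confirm the kernel exposes the device, then prove it is readable
    grep -q -w nbd0 /proc/partitions
    dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
    stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest    # expect 4096 bytes
    rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest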
00:05:52.189 12:33:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.189 { 00:05:52.189 "nbd_device": "/dev/nbd0", 00:05:52.189 "bdev_name": "Malloc0" 00:05:52.189 }, 00:05:52.189 { 00:05:52.189 "nbd_device": "/dev/nbd1", 00:05:52.189 "bdev_name": "Malloc1" 00:05:52.189 } 00:05:52.189 ]' 00:05:52.189 12:33:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.449 /dev/nbd1' 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.449 /dev/nbd1' 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.449 256+0 records in 00:05:52.449 256+0 records out 00:05:52.449 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120182 s, 87.2 MB/s 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:52.449 256+0 records in 00:05:52.449 256+0 records out 00:05:52.449 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169412 s, 61.9 MB/s 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.449 256+0 records in 00:05:52.449 256+0 records out 00:05:52.449 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176547 s, 59.4 MB/s 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.449 12:33:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.450 12:33:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.450 12:33:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.450 12:33:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
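The data-verify half of the same round, condensed into a sketch: the random seed file, the O_DIRECT writes and the 1M compare window are taken from the dd and cmp lines above, while the loop and the RAND shorthand are added here only to keep the sketch short.

    RAND=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of=$RAND bs=4096 count=256           # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$RAND of=$nbd bs=4096 count=256 oflag=direct   # write the seed onto each exported bdev
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $RAND $nbd                              # read back and compare the first 1M
    done
    rm $RAND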
00:05:52.450 12:33:25 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.450 12:33:25 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:52.450 12:33:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.450 12:33:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.450 12:33:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.450 12:33:25 -- bdev/nbd_common.sh@51 -- # local i 00:05:52.450 12:33:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.450 12:33:25 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.710 12:33:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.710 12:33:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.710 12:33:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.710 12:33:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.710 12:33:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.711 12:33:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.711 12:33:25 -- bdev/nbd_common.sh@41 -- # break 00:05:52.711 12:33:25 -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.711 12:33:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.711 12:33:25 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.711 12:33:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.711 12:33:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.711 12:33:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.711 12:33:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.711 12:33:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.711 12:33:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.711 12:33:25 -- bdev/nbd_common.sh@41 -- # break 00:05:52.711 12:33:25 -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.711 12:33:25 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.711 12:33:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.711 12:33:25 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.973 12:33:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.973 12:33:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.973 12:33:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.973 12:33:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.973 12:33:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.973 12:33:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.973 12:33:25 -- bdev/nbd_common.sh@65 -- # true 00:05:52.973 12:33:25 -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.973 12:33:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.973 12:33:25 -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.973 12:33:25 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.973 12:33:25 -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.973 12:33:25 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.234 12:33:26 -- event/event.sh@35 -- # sleep 3 00:05:53.234 [2024-11-20 12:33:26.284973] app.c: 798:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:05:53.496 [2024-11-20 12:33:26.346498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.496 [2024-11-20 12:33:26.346501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.496 [2024-11-20 12:33:26.377939] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.496 [2024-11-20 12:33:26.377973] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.800 12:33:29 -- event/event.sh@23 -- # for i in {0..2} 00:05:56.800 12:33:29 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:56.800 spdk_app_start Round 1 00:05:56.800 12:33:29 -- event/event.sh@25 -- # waitforlisten 318771 /var/tmp/spdk-nbd.sock 00:05:56.800 12:33:29 -- common/autotest_common.sh@829 -- # '[' -z 318771 ']' 00:05:56.800 12:33:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.800 12:33:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.800 12:33:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.800 12:33:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.800 12:33:29 -- common/autotest_common.sh@10 -- # set +x 00:05:56.800 12:33:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.800 12:33:29 -- common/autotest_common.sh@862 -- # return 0 00:05:56.800 12:33:29 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.800 Malloc0 00:05:56.800 12:33:29 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.800 Malloc1 00:05:56.800 12:33:29 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@12 -- # local i 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.800 /dev/nbd0 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.800 
12:33:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:56.800 12:33:29 -- common/autotest_common.sh@867 -- # local i 00:05:56.800 12:33:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:56.800 12:33:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:56.800 12:33:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:56.800 12:33:29 -- common/autotest_common.sh@871 -- # break 00:05:56.800 12:33:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:56.800 12:33:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:56.800 12:33:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.800 1+0 records in 00:05:56.800 1+0 records out 00:05:56.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212189 s, 19.3 MB/s 00:05:56.800 12:33:29 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:56.800 12:33:29 -- common/autotest_common.sh@884 -- # size=4096 00:05:56.800 12:33:29 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:56.800 12:33:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:56.800 12:33:29 -- common/autotest_common.sh@887 -- # return 0 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.800 12:33:29 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.061 /dev/nbd1 00:05:57.061 12:33:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.061 12:33:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.061 12:33:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:57.061 12:33:29 -- common/autotest_common.sh@867 -- # local i 00:05:57.061 12:33:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:57.061 12:33:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:57.061 12:33:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:57.061 12:33:30 -- common/autotest_common.sh@871 -- # break 00:05:57.061 12:33:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:57.061 12:33:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:57.061 12:33:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.061 1+0 records in 00:05:57.061 1+0 records out 00:05:57.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279911 s, 14.6 MB/s 00:05:57.061 12:33:30 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:57.061 12:33:30 -- common/autotest_common.sh@884 -- # size=4096 00:05:57.061 12:33:30 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:57.061 12:33:30 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:57.061 12:33:30 -- common/autotest_common.sh@887 -- # return 0 00:05:57.061 12:33:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.061 12:33:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.061 12:33:30 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.061 12:33:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.061 
12:33:30 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.322 12:33:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.322 { 00:05:57.322 "nbd_device": "/dev/nbd0", 00:05:57.322 "bdev_name": "Malloc0" 00:05:57.322 }, 00:05:57.322 { 00:05:57.322 "nbd_device": "/dev/nbd1", 00:05:57.322 "bdev_name": "Malloc1" 00:05:57.322 } 00:05:57.322 ]' 00:05:57.322 12:33:30 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.322 { 00:05:57.322 "nbd_device": "/dev/nbd0", 00:05:57.322 "bdev_name": "Malloc0" 00:05:57.322 }, 00:05:57.322 { 00:05:57.322 "nbd_device": "/dev/nbd1", 00:05:57.322 "bdev_name": "Malloc1" 00:05:57.322 } 00:05:57.322 ]' 00:05:57.322 12:33:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.322 12:33:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.322 /dev/nbd1' 00:05:57.322 12:33:30 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.322 /dev/nbd1' 00:05:57.322 12:33:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.322 12:33:30 -- bdev/nbd_common.sh@65 -- # count=2 00:05:57.322 12:33:30 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:57.322 12:33:30 -- bdev/nbd_common.sh@95 -- # count=2 00:05:57.322 12:33:30 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:57.322 12:33:30 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:57.322 12:33:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.322 12:33:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.322 12:33:30 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.322 12:33:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:57.323 256+0 records in 00:05:57.323 256+0 records out 00:05:57.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127502 s, 82.2 MB/s 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.323 256+0 records in 00:05:57.323 256+0 records out 00:05:57.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158004 s, 66.4 MB/s 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.323 256+0 records in 00:05:57.323 256+0 records out 00:05:57.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0414728 s, 25.3 MB/s 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:57.323 
12:33:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@51 -- # local i 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.323 12:33:30 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.584 12:33:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.584 12:33:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.584 12:33:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.584 12:33:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.584 12:33:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.584 12:33:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.584 12:33:30 -- bdev/nbd_common.sh@41 -- # break 00:05:57.584 12:33:30 -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.584 12:33:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.584 12:33:30 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@41 -- # break 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@65 -- # true 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.846 
12:33:30 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.846 12:33:30 -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.846 12:33:30 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.107 12:33:31 -- event/event.sh@35 -- # sleep 3 00:05:58.107 [2024-11-20 12:33:31.209560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.368 [2024-11-20 12:33:31.270497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.368 [2024-11-20 12:33:31.270498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.368 [2024-11-20 12:33:31.301948] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:58.368 [2024-11-20 12:33:31.301980] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:01.670 12:33:34 -- event/event.sh@23 -- # for i in {0..2} 00:06:01.670 12:33:34 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:01.670 spdk_app_start Round 2 00:06:01.670 12:33:34 -- event/event.sh@25 -- # waitforlisten 318771 /var/tmp/spdk-nbd.sock 00:06:01.670 12:33:34 -- common/autotest_common.sh@829 -- # '[' -z 318771 ']' 00:06:01.670 12:33:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.670 12:33:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.670 12:33:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.670 12:33:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.670 12:33:34 -- common/autotest_common.sh@10 -- # set +x 00:06:01.670 12:33:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.670 12:33:34 -- common/autotest_common.sh@862 -- # return 0 00:06:01.670 12:33:34 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.670 Malloc0 00:06:01.670 12:33:34 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.670 Malloc1 00:06:01.671 12:33:34 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@12 -- # local i 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@14 -- # 
(( i = 0 )) 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.671 /dev/nbd0 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.671 12:33:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:01.671 12:33:34 -- common/autotest_common.sh@867 -- # local i 00:06:01.671 12:33:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:01.671 12:33:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.671 12:33:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:01.671 12:33:34 -- common/autotest_common.sh@871 -- # break 00:06:01.671 12:33:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.671 12:33:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.671 12:33:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.671 1+0 records in 00:06:01.671 1+0 records out 00:06:01.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023916 s, 17.1 MB/s 00:06:01.671 12:33:34 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.671 12:33:34 -- common/autotest_common.sh@884 -- # size=4096 00:06:01.671 12:33:34 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.671 12:33:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:01.671 12:33:34 -- common/autotest_common.sh@887 -- # return 0 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.671 12:33:34 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.932 /dev/nbd1 00:06:01.932 12:33:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.932 12:33:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.932 12:33:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:01.932 12:33:34 -- common/autotest_common.sh@867 -- # local i 00:06:01.932 12:33:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:01.932 12:33:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.932 12:33:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:01.932 12:33:34 -- common/autotest_common.sh@871 -- # break 00:06:01.933 12:33:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.933 12:33:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.933 12:33:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.933 1+0 records in 00:06:01.933 1+0 records out 00:06:01.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176493 s, 23.2 MB/s 00:06:01.933 12:33:34 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.933 12:33:34 -- common/autotest_common.sh@884 -- # size=4096 00:06:01.933 12:33:34 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.933 12:33:34 -- common/autotest_common.sh@886 
-- # '[' 4096 '!=' 0 ']' 00:06:01.933 12:33:34 -- common/autotest_common.sh@887 -- # return 0 00:06:01.933 12:33:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.933 12:33:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.933 12:33:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.933 12:33:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.933 12:33:34 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.196 { 00:06:02.196 "nbd_device": "/dev/nbd0", 00:06:02.196 "bdev_name": "Malloc0" 00:06:02.196 }, 00:06:02.196 { 00:06:02.196 "nbd_device": "/dev/nbd1", 00:06:02.196 "bdev_name": "Malloc1" 00:06:02.196 } 00:06:02.196 ]' 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.196 { 00:06:02.196 "nbd_device": "/dev/nbd0", 00:06:02.196 "bdev_name": "Malloc0" 00:06:02.196 }, 00:06:02.196 { 00:06:02.196 "nbd_device": "/dev/nbd1", 00:06:02.196 "bdev_name": "Malloc1" 00:06:02.196 } 00:06:02.196 ]' 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.196 /dev/nbd1' 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.196 /dev/nbd1' 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.196 256+0 records in 00:06:02.196 256+0 records out 00:06:02.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127189 s, 82.4 MB/s 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.196 256+0 records in 00:06:02.196 256+0 records out 00:06:02.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183483 s, 57.1 MB/s 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.196 256+0 records in 00:06:02.196 256+0 records out 00:06:02.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162398 s, 64.6 MB/s 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.196 12:33:35 -- 
bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.196 12:33:35 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.197 12:33:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.197 12:33:35 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.197 12:33:35 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.197 12:33:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.197 12:33:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.197 12:33:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.197 12:33:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.197 12:33:35 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.197 12:33:35 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.197 12:33:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.197 12:33:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.197 12:33:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.197 12:33:35 -- bdev/nbd_common.sh@51 -- # local i 00:06:02.197 12:33:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.197 12:33:35 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.458 12:33:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.458 12:33:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.458 12:33:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.458 12:33:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.458 12:33:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.458 12:33:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.458 12:33:35 -- bdev/nbd_common.sh@41 -- # break 00:06:02.458 12:33:35 -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.458 12:33:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.458 12:33:35 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@41 -- # break 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.719 12:33:35 -- 
bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@65 -- # true 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.719 12:33:35 -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.719 12:33:35 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.979 12:33:35 -- event/event.sh@35 -- # sleep 3 00:06:03.239 [2024-11-20 12:33:36.101994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.239 [2024-11-20 12:33:36.163017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.239 [2024-11-20 12:33:36.163021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.239 [2024-11-20 12:33:36.194437] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.239 [2024-11-20 12:33:36.194471] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.539 12:33:38 -- event/event.sh@38 -- # waitforlisten 318771 /var/tmp/spdk-nbd.sock 00:06:06.539 12:33:38 -- common/autotest_common.sh@829 -- # '[' -z 318771 ']' 00:06:06.539 12:33:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.539 12:33:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.539 12:33:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.539 12:33:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.539 12:33:38 -- common/autotest_common.sh@10 -- # set +x 00:06:06.539 12:33:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.539 12:33:39 -- common/autotest_common.sh@862 -- # return 0 00:06:06.539 12:33:39 -- event/event.sh@39 -- # killprocess 318771 00:06:06.539 12:33:39 -- common/autotest_common.sh@936 -- # '[' -z 318771 ']' 00:06:06.539 12:33:39 -- common/autotest_common.sh@940 -- # kill -0 318771 00:06:06.539 12:33:39 -- common/autotest_common.sh@941 -- # uname 00:06:06.539 12:33:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:06.539 12:33:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 318771 00:06:06.539 12:33:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:06.539 12:33:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:06.539 12:33:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 318771' 00:06:06.539 killing process with pid 318771 00:06:06.539 12:33:39 -- common/autotest_common.sh@955 -- # kill 318771 00:06:06.539 12:33:39 -- common/autotest_common.sh@960 -- # wait 318771 00:06:06.539 spdk_app_start is called in Round 0. 00:06:06.539 Shutdown signal received, stop current app iteration 00:06:06.539 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:06.539 spdk_app_start is called in Round 1. 
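The killprocess calls that close each stage above all follow the same pattern. Reconstructed from the xtrace lines, it looks roughly like the sketch below; the real helper lives in test/common/autotest_common.sh, and the sudo-owned branch that the trace checks but never takes is left out.

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                              # argument guard seen at @936
        kill -0 "$pid"                                         # confirm the process is still alive (@940)
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")    # reactor_0 / reactor_2 in this run (@942)
        fi
        # @946 compares $process_name against "sudo"; that branch is never taken in this run,
        # so the sudo-owned case is omitted from this sketch
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }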
00:06:06.539 Shutdown signal received, stop current app iteration 00:06:06.539 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:06.539 spdk_app_start is called in Round 2. 00:06:06.539 Shutdown signal received, stop current app iteration 00:06:06.539 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:06.539 spdk_app_start is called in Round 3. 00:06:06.539 Shutdown signal received, stop current app iteration 00:06:06.539 12:33:39 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:06.539 12:33:39 -- event/event.sh@42 -- # return 0 00:06:06.539 00:06:06.540 real 0m15.750s 00:06:06.540 user 0m34.041s 00:06:06.540 sys 0m2.140s 00:06:06.540 12:33:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.540 12:33:39 -- common/autotest_common.sh@10 -- # set +x 00:06:06.540 ************************************ 00:06:06.540 END TEST app_repeat 00:06:06.540 ************************************ 00:06:06.540 12:33:39 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:06.540 12:33:39 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:06.540 12:33:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.540 12:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.540 12:33:39 -- common/autotest_common.sh@10 -- # set +x 00:06:06.540 ************************************ 00:06:06.540 START TEST cpu_locks 00:06:06.540 ************************************ 00:06:06.540 12:33:39 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:06.540 * Looking for test storage... 00:06:06.540 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:06.540 12:33:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:06.540 12:33:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:06.540 12:33:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:06.540 12:33:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:06.540 12:33:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:06.540 12:33:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:06.540 12:33:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:06.540 12:33:39 -- scripts/common.sh@335 -- # IFS=.-: 00:06:06.540 12:33:39 -- scripts/common.sh@335 -- # read -ra ver1 00:06:06.540 12:33:39 -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.540 12:33:39 -- scripts/common.sh@336 -- # read -ra ver2 00:06:06.540 12:33:39 -- scripts/common.sh@337 -- # local 'op=<' 00:06:06.540 12:33:39 -- scripts/common.sh@339 -- # ver1_l=2 00:06:06.540 12:33:39 -- scripts/common.sh@340 -- # ver2_l=1 00:06:06.540 12:33:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:06.540 12:33:39 -- scripts/common.sh@343 -- # case "$op" in 00:06:06.540 12:33:39 -- scripts/common.sh@344 -- # : 1 00:06:06.540 12:33:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:06.540 12:33:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.540 12:33:39 -- scripts/common.sh@364 -- # decimal 1 00:06:06.540 12:33:39 -- scripts/common.sh@352 -- # local d=1 00:06:06.540 12:33:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.540 12:33:39 -- scripts/common.sh@354 -- # echo 1 00:06:06.540 12:33:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:06.540 12:33:39 -- scripts/common.sh@365 -- # decimal 2 00:06:06.540 12:33:39 -- scripts/common.sh@352 -- # local d=2 00:06:06.540 12:33:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.540 12:33:39 -- scripts/common.sh@354 -- # echo 2 00:06:06.540 12:33:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:06.540 12:33:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:06.540 12:33:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:06.540 12:33:39 -- scripts/common.sh@367 -- # return 0 00:06:06.540 12:33:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.540 12:33:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:06.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.540 --rc genhtml_branch_coverage=1 00:06:06.540 --rc genhtml_function_coverage=1 00:06:06.540 --rc genhtml_legend=1 00:06:06.540 --rc geninfo_all_blocks=1 00:06:06.540 --rc geninfo_unexecuted_blocks=1 00:06:06.540 00:06:06.540 ' 00:06:06.540 12:33:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:06.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.540 --rc genhtml_branch_coverage=1 00:06:06.540 --rc genhtml_function_coverage=1 00:06:06.540 --rc genhtml_legend=1 00:06:06.540 --rc geninfo_all_blocks=1 00:06:06.540 --rc geninfo_unexecuted_blocks=1 00:06:06.540 00:06:06.540 ' 00:06:06.540 12:33:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:06.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.540 --rc genhtml_branch_coverage=1 00:06:06.540 --rc genhtml_function_coverage=1 00:06:06.540 --rc genhtml_legend=1 00:06:06.540 --rc geninfo_all_blocks=1 00:06:06.540 --rc geninfo_unexecuted_blocks=1 00:06:06.540 00:06:06.540 ' 00:06:06.540 12:33:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:06.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.540 --rc genhtml_branch_coverage=1 00:06:06.540 --rc genhtml_function_coverage=1 00:06:06.540 --rc genhtml_legend=1 00:06:06.540 --rc geninfo_all_blocks=1 00:06:06.540 --rc geninfo_unexecuted_blocks=1 00:06:06.540 00:06:06.540 ' 00:06:06.540 12:33:39 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:06.540 12:33:39 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:06.540 12:33:39 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:06.540 12:33:39 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:06.540 12:33:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.540 12:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.540 12:33:39 -- common/autotest_common.sh@10 -- # set +x 00:06:06.540 ************************************ 00:06:06.540 START TEST default_locks 00:06:06.540 ************************************ 00:06:06.540 12:33:39 -- common/autotest_common.sh@1114 -- # default_locks 00:06:06.540 12:33:39 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=322368 00:06:06.540 12:33:39 -- event/cpu_locks.sh@47 -- # waitforlisten 322368 00:06:06.540 12:33:39 -- event/cpu_locks.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.540 12:33:39 -- common/autotest_common.sh@829 -- # '[' -z 322368 ']' 00:06:06.540 12:33:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.540 12:33:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.540 12:33:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.540 12:33:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.540 12:33:39 -- common/autotest_common.sh@10 -- # set +x 00:06:06.540 [2024-11-20 12:33:39.605327] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.540 [2024-11-20 12:33:39.605384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322368 ] 00:06:06.540 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.802 [2024-11-20 12:33:39.667606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.802 [2024-11-20 12:33:39.732256] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:06.802 [2024-11-20 12:33:39.732398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.373 12:33:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.373 12:33:40 -- common/autotest_common.sh@862 -- # return 0 00:06:07.373 12:33:40 -- event/cpu_locks.sh@49 -- # locks_exist 322368 00:06:07.373 12:33:40 -- event/cpu_locks.sh@22 -- # lslocks -p 322368 00:06:07.373 12:33:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.944 lslocks: write error 00:06:07.944 12:33:40 -- event/cpu_locks.sh@50 -- # killprocess 322368 00:06:07.944 12:33:40 -- common/autotest_common.sh@936 -- # '[' -z 322368 ']' 00:06:07.944 12:33:40 -- common/autotest_common.sh@940 -- # kill -0 322368 00:06:07.944 12:33:40 -- common/autotest_common.sh@941 -- # uname 00:06:07.944 12:33:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:07.944 12:33:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 322368 00:06:07.944 12:33:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:07.944 12:33:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:07.944 12:33:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 322368' 00:06:07.944 killing process with pid 322368 00:06:07.944 12:33:40 -- common/autotest_common.sh@955 -- # kill 322368 00:06:07.944 12:33:40 -- common/autotest_common.sh@960 -- # wait 322368 00:06:08.207 12:33:41 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 322368 00:06:08.207 12:33:41 -- common/autotest_common.sh@650 -- # local es=0 00:06:08.207 12:33:41 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 322368 00:06:08.207 12:33:41 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:08.207 12:33:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.207 12:33:41 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:08.207 12:33:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.207 12:33:41 -- common/autotest_common.sh@653 -- # waitforlisten 322368 00:06:08.207 12:33:41 -- 
common/autotest_common.sh@829 -- # '[' -z 322368 ']' 00:06:08.207 12:33:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.207 12:33:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.207 12:33:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.207 12:33:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.207 12:33:41 -- common/autotest_common.sh@10 -- # set +x 00:06:08.207 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (322368) - No such process 00:06:08.207 ERROR: process (pid: 322368) is no longer running 00:06:08.207 12:33:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.207 12:33:41 -- common/autotest_common.sh@862 -- # return 1 00:06:08.207 12:33:41 -- common/autotest_common.sh@653 -- # es=1 00:06:08.207 12:33:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.207 12:33:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:08.207 12:33:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.207 12:33:41 -- event/cpu_locks.sh@54 -- # no_locks 00:06:08.207 12:33:41 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:08.207 12:33:41 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:08.207 12:33:41 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:08.207 00:06:08.207 real 0m1.533s 00:06:08.207 user 0m1.661s 00:06:08.207 sys 0m0.495s 00:06:08.207 12:33:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.207 12:33:41 -- common/autotest_common.sh@10 -- # set +x 00:06:08.207 ************************************ 00:06:08.207 END TEST default_locks 00:06:08.207 ************************************ 00:06:08.207 12:33:41 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:08.207 12:33:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.207 12:33:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.208 12:33:41 -- common/autotest_common.sh@10 -- # set +x 00:06:08.208 ************************************ 00:06:08.208 START TEST default_locks_via_rpc 00:06:08.208 ************************************ 00:06:08.208 12:33:41 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:08.208 12:33:41 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=322659 00:06:08.208 12:33:41 -- event/cpu_locks.sh@63 -- # waitforlisten 322659 00:06:08.208 12:33:41 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.208 12:33:41 -- common/autotest_common.sh@829 -- # '[' -z 322659 ']' 00:06:08.208 12:33:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.208 12:33:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.208 12:33:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.208 12:33:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.208 12:33:41 -- common/autotest_common.sh@10 -- # set +x 00:06:08.208 [2024-11-20 12:33:41.186740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
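The default_locks run traced above exercises SPDK's per-core lock files: locks_exist greps the target's lslocks -p output for spdk_cpu_lock, and once the target has been killed, no_locks confirms that no /var/tmp/spdk_cpu_lock_* files survive. The stray "lslocks: write error" line is expected noise: grep -q exits on its first match and closes the pipe, so lslocks reports a broken pipe. A minimal sketch of the two checks, simplified from the real helpers in event/cpu_locks.sh:

    # locks_exist: a running spdk_tgt should hold a flock on a per-core file
    # named spdk_cpu_lock_NNN; lslocks -p lists only that process's locks.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    # no_locks: after the target is killed, no lock files should remain.
    no_locks() {
        shopt -s nullglob                      # non-matching glob expands to nothing
        local lock_files=(/var/tmp/spdk_cpu_lock_*)
        (( ${#lock_files[@]} == 0 ))
    }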
00:06:08.208 [2024-11-20 12:33:41.186804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322659 ] 00:06:08.208 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.208 [2024-11-20 12:33:41.248162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.468 [2024-11-20 12:33:41.314915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.468 [2024-11-20 12:33:41.315052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.039 12:33:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.039 12:33:41 -- common/autotest_common.sh@862 -- # return 0 00:06:09.039 12:33:41 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:09.039 12:33:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.039 12:33:41 -- common/autotest_common.sh@10 -- # set +x 00:06:09.039 12:33:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.039 12:33:41 -- event/cpu_locks.sh@67 -- # no_locks 00:06:09.039 12:33:41 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:09.039 12:33:41 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:09.039 12:33:41 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:09.039 12:33:41 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:09.039 12:33:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.039 12:33:41 -- common/autotest_common.sh@10 -- # set +x 00:06:09.039 12:33:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.039 12:33:41 -- event/cpu_locks.sh@71 -- # locks_exist 322659 00:06:09.039 12:33:41 -- event/cpu_locks.sh@22 -- # lslocks -p 322659 00:06:09.039 12:33:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.299 12:33:42 -- event/cpu_locks.sh@73 -- # killprocess 322659 00:06:09.299 12:33:42 -- common/autotest_common.sh@936 -- # '[' -z 322659 ']' 00:06:09.299 12:33:42 -- common/autotest_common.sh@940 -- # kill -0 322659 00:06:09.299 12:33:42 -- common/autotest_common.sh@941 -- # uname 00:06:09.299 12:33:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.299 12:33:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 322659 00:06:09.560 12:33:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:09.560 12:33:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:09.560 12:33:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 322659' 00:06:09.560 killing process with pid 322659 00:06:09.560 12:33:42 -- common/autotest_common.sh@955 -- # kill 322659 00:06:09.560 12:33:42 -- common/autotest_common.sh@960 -- # wait 322659 00:06:09.560 00:06:09.560 real 0m1.528s 00:06:09.560 user 0m1.651s 00:06:09.560 sys 0m0.514s 00:06:09.560 12:33:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.560 12:33:42 -- common/autotest_common.sh@10 -- # set +x 00:06:09.560 ************************************ 00:06:09.560 END TEST default_locks_via_rpc 00:06:09.560 ************************************ 00:06:09.822 12:33:42 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:09.822 12:33:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.822 12:33:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.822 12:33:42 -- common/autotest_common.sh@10 
-- # set +x 00:06:09.822 ************************************ 00:06:09.822 START TEST non_locking_app_on_locked_coremask 00:06:09.822 ************************************ 00:06:09.822 12:33:42 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:09.822 12:33:42 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=322975 00:06:09.822 12:33:42 -- event/cpu_locks.sh@81 -- # waitforlisten 322975 /var/tmp/spdk.sock 00:06:09.822 12:33:42 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.822 12:33:42 -- common/autotest_common.sh@829 -- # '[' -z 322975 ']' 00:06:09.822 12:33:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.822 12:33:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.822 12:33:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.822 12:33:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.822 12:33:42 -- common/autotest_common.sh@10 -- # set +x 00:06:09.822 [2024-11-20 12:33:42.757025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:09.822 [2024-11-20 12:33:42.757087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322975 ] 00:06:09.822 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.822 [2024-11-20 12:33:42.817235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.822 [2024-11-20 12:33:42.882275] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.822 [2024-11-20 12:33:42.882397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.764 12:33:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.764 12:33:43 -- common/autotest_common.sh@862 -- # return 0 00:06:10.764 12:33:43 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=323131 00:06:10.764 12:33:43 -- event/cpu_locks.sh@85 -- # waitforlisten 323131 /var/tmp/spdk2.sock 00:06:10.764 12:33:43 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:10.764 12:33:43 -- common/autotest_common.sh@829 -- # '[' -z 323131 ']' 00:06:10.764 12:33:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.764 12:33:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.764 12:33:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.764 12:33:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.764 12:33:43 -- common/autotest_common.sh@10 -- # set +x 00:06:10.764 [2024-11-20 12:33:43.574667] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
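In non_locking_app_on_locked_coremask above, the first spdk_tgt claims core 0 and a second instance is then started on the same core mask but with --disable-cpumask-locks and its own RPC socket; because it skips taking the per-core lock, it is expected to come up alongside the first target rather than abort. A sketch of that launch pattern using the same flags as the trace (the binary path is shortened here, and backgrounding stands in for the harness's process handling):

    SPDK_TGT=./build/bin/spdk_tgt            # trace uses the full workspace path
    $SPDK_TGT -m 0x1 &                       # claims /var/tmp/spdk_cpu_lock_000, serves /var/tmp/spdk.sock
    $SPDK_TGT -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
                                             # same core mask, no lock taken, separate RPC socket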
00:06:10.764 [2024-11-20 12:33:43.574721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323131 ] 00:06:10.764 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.764 [2024-11-20 12:33:43.664232] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:10.764 [2024-11-20 12:33:43.664257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.764 [2024-11-20 12:33:43.791231] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:10.764 [2024-11-20 12:33:43.791357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.336 12:33:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.336 12:33:44 -- common/autotest_common.sh@862 -- # return 0 00:06:11.336 12:33:44 -- event/cpu_locks.sh@87 -- # locks_exist 322975 00:06:11.336 12:33:44 -- event/cpu_locks.sh@22 -- # lslocks -p 322975 00:06:11.336 12:33:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.277 lslocks: write error 00:06:12.277 12:33:45 -- event/cpu_locks.sh@89 -- # killprocess 322975 00:06:12.277 12:33:45 -- common/autotest_common.sh@936 -- # '[' -z 322975 ']' 00:06:12.277 12:33:45 -- common/autotest_common.sh@940 -- # kill -0 322975 00:06:12.277 12:33:45 -- common/autotest_common.sh@941 -- # uname 00:06:12.277 12:33:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:12.277 12:33:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 322975 00:06:12.277 12:33:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:12.277 12:33:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:12.277 12:33:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 322975' 00:06:12.277 killing process with pid 322975 00:06:12.277 12:33:45 -- common/autotest_common.sh@955 -- # kill 322975 00:06:12.277 12:33:45 -- common/autotest_common.sh@960 -- # wait 322975 00:06:12.538 12:33:45 -- event/cpu_locks.sh@90 -- # killprocess 323131 00:06:12.538 12:33:45 -- common/autotest_common.sh@936 -- # '[' -z 323131 ']' 00:06:12.538 12:33:45 -- common/autotest_common.sh@940 -- # kill -0 323131 00:06:12.538 12:33:45 -- common/autotest_common.sh@941 -- # uname 00:06:12.538 12:33:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:12.538 12:33:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 323131 00:06:12.538 12:33:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:12.538 12:33:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:12.538 12:33:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 323131' 00:06:12.538 killing process with pid 323131 00:06:12.538 12:33:45 -- common/autotest_common.sh@955 -- # kill 323131 00:06:12.538 12:33:45 -- common/autotest_common.sh@960 -- # wait 323131 00:06:12.800 00:06:12.800 real 0m3.067s 00:06:12.800 user 0m3.366s 00:06:12.800 sys 0m0.925s 00:06:12.800 12:33:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.800 12:33:45 -- common/autotest_common.sh@10 -- # set +x 00:06:12.800 ************************************ 00:06:12.800 END TEST non_locking_app_on_locked_coremask 00:06:12.800 ************************************ 00:06:12.800 12:33:45 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 
00:06:12.800 12:33:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.800 12:33:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.800 12:33:45 -- common/autotest_common.sh@10 -- # set +x 00:06:12.800 ************************************ 00:06:12.800 START TEST locking_app_on_unlocked_coremask 00:06:12.800 ************************************ 00:06:12.800 12:33:45 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:12.800 12:33:45 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=323562 00:06:12.800 12:33:45 -- event/cpu_locks.sh@99 -- # waitforlisten 323562 /var/tmp/spdk.sock 00:06:12.800 12:33:45 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:12.800 12:33:45 -- common/autotest_common.sh@829 -- # '[' -z 323562 ']' 00:06:12.800 12:33:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.800 12:33:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.800 12:33:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.800 12:33:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.800 12:33:45 -- common/autotest_common.sh@10 -- # set +x 00:06:12.800 [2024-11-20 12:33:45.870174] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.800 [2024-11-20 12:33:45.870240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323562 ] 00:06:12.800 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.062 [2024-11-20 12:33:45.933645] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:13.062 [2024-11-20 12:33:45.933684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.062 [2024-11-20 12:33:46.000276] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:13.062 [2024-11-20 12:33:46.000426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.633 12:33:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.633 12:33:46 -- common/autotest_common.sh@862 -- # return 0 00:06:13.633 12:33:46 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=323844 00:06:13.633 12:33:46 -- event/cpu_locks.sh@103 -- # waitforlisten 323844 /var/tmp/spdk2.sock 00:06:13.633 12:33:46 -- common/autotest_common.sh@829 -- # '[' -z 323844 ']' 00:06:13.633 12:33:46 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:13.633 12:33:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.633 12:33:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.633 12:33:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:13.633 12:33:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.633 12:33:46 -- common/autotest_common.sh@10 -- # set +x 00:06:13.633 [2024-11-20 12:33:46.704081] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:13.633 [2024-11-20 12:33:46.704133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323844 ] 00:06:13.633 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.894 [2024-11-20 12:33:46.795898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.894 [2024-11-20 12:33:46.923360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:13.894 [2024-11-20 12:33:46.923497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.465 12:33:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.465 12:33:47 -- common/autotest_common.sh@862 -- # return 0 00:06:14.465 12:33:47 -- event/cpu_locks.sh@105 -- # locks_exist 323844 00:06:14.465 12:33:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.465 12:33:47 -- event/cpu_locks.sh@22 -- # lslocks -p 323844 00:06:15.036 lslocks: write error 00:06:15.036 12:33:48 -- event/cpu_locks.sh@107 -- # killprocess 323562 00:06:15.036 12:33:48 -- common/autotest_common.sh@936 -- # '[' -z 323562 ']' 00:06:15.036 12:33:48 -- common/autotest_common.sh@940 -- # kill -0 323562 00:06:15.036 12:33:48 -- common/autotest_common.sh@941 -- # uname 00:06:15.036 12:33:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:15.036 12:33:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 323562 00:06:15.296 12:33:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:15.296 12:33:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:15.296 12:33:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 323562' 00:06:15.296 killing process with pid 323562 00:06:15.296 12:33:48 -- common/autotest_common.sh@955 -- # kill 323562 00:06:15.296 12:33:48 -- common/autotest_common.sh@960 -- # wait 323562 00:06:15.557 12:33:48 -- event/cpu_locks.sh@108 -- # killprocess 323844 00:06:15.557 12:33:48 -- common/autotest_common.sh@936 -- # '[' -z 323844 ']' 00:06:15.557 12:33:48 -- common/autotest_common.sh@940 -- # kill -0 323844 00:06:15.557 12:33:48 -- common/autotest_common.sh@941 -- # uname 00:06:15.557 12:33:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:15.557 12:33:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 323844 00:06:15.557 12:33:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:15.557 12:33:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:15.557 12:33:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 323844' 00:06:15.557 killing process with pid 323844 00:06:15.557 12:33:48 -- common/autotest_common.sh@955 -- # kill 323844 00:06:15.557 12:33:48 -- common/autotest_common.sh@960 -- # wait 323844 00:06:15.817 00:06:15.817 real 0m3.050s 00:06:15.817 user 0m3.376s 00:06:15.817 sys 0m0.906s 00:06:15.817 12:33:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.817 12:33:48 -- common/autotest_common.sh@10 -- # set +x 00:06:15.817 ************************************ 00:06:15.817 END TEST locking_app_on_unlocked_coremask 00:06:15.817 
************************************ 00:06:15.817 12:33:48 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:15.817 12:33:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.817 12:33:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.817 12:33:48 -- common/autotest_common.sh@10 -- # set +x 00:06:15.817 ************************************ 00:06:15.817 START TEST locking_app_on_locked_coremask 00:06:15.817 ************************************ 00:06:15.817 12:33:48 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:15.817 12:33:48 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=324231 00:06:15.817 12:33:48 -- event/cpu_locks.sh@116 -- # waitforlisten 324231 /var/tmp/spdk.sock 00:06:15.817 12:33:48 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.817 12:33:48 -- common/autotest_common.sh@829 -- # '[' -z 324231 ']' 00:06:15.817 12:33:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.817 12:33:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.817 12:33:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.817 12:33:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.817 12:33:48 -- common/autotest_common.sh@10 -- # set +x 00:06:16.078 [2024-11-20 12:33:48.963912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.078 [2024-11-20 12:33:48.963966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid324231 ] 00:06:16.078 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.078 [2024-11-20 12:33:49.024269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.078 [2024-11-20 12:33:49.085391] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.078 [2024-11-20 12:33:49.085527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.649 12:33:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.649 12:33:49 -- common/autotest_common.sh@862 -- # return 0 00:06:16.649 12:33:49 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=324561 00:06:16.649 12:33:49 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 324561 /var/tmp/spdk2.sock 00:06:16.649 12:33:49 -- common/autotest_common.sh@650 -- # local es=0 00:06:16.649 12:33:49 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.649 12:33:49 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 324561 /var/tmp/spdk2.sock 00:06:16.649 12:33:49 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:16.649 12:33:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.649 12:33:49 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:16.649 12:33:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.649 12:33:49 -- common/autotest_common.sh@653 -- # waitforlisten 324561 /var/tmp/spdk2.sock 00:06:16.649 12:33:49 -- common/autotest_common.sh@829 -- # '[' -z 324561 ']' 
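The NOT waitforlisten 324561 ... call above wraps an invocation that is supposed to fail: this second target was started without --disable-cpumask-locks on a core that pid 324231 already holds, so it must exit instead of ever listening on /var/tmp/spdk2.sock. A stripped-down sketch of the NOT pattern; the real helper in autotest_common.sh also validates the wrapped command before running it:

    # NOT: invert the wrapped command's exit status, so an expected failure
    # keeps a set -e test script green.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))                        # succeed only if the command failed
    }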
00:06:16.649 12:33:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.649 12:33:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.650 12:33:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.650 12:33:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.650 12:33:49 -- common/autotest_common.sh@10 -- # set +x 00:06:16.910 [2024-11-20 12:33:49.801020] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.910 [2024-11-20 12:33:49.801075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid324561 ] 00:06:16.910 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.910 [2024-11-20 12:33:49.888856] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 324231 has claimed it. 00:06:16.910 [2024-11-20 12:33:49.888897] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:17.481 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (324561) - No such process 00:06:17.481 ERROR: process (pid: 324561) is no longer running 00:06:17.481 12:33:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.481 12:33:50 -- common/autotest_common.sh@862 -- # return 1 00:06:17.481 12:33:50 -- common/autotest_common.sh@653 -- # es=1 00:06:17.481 12:33:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.481 12:33:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.481 12:33:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.481 12:33:50 -- event/cpu_locks.sh@122 -- # locks_exist 324231 00:06:17.481 12:33:50 -- event/cpu_locks.sh@22 -- # lslocks -p 324231 00:06:17.481 12:33:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.742 lslocks: write error 00:06:17.742 12:33:50 -- event/cpu_locks.sh@124 -- # killprocess 324231 00:06:17.742 12:33:50 -- common/autotest_common.sh@936 -- # '[' -z 324231 ']' 00:06:17.742 12:33:50 -- common/autotest_common.sh@940 -- # kill -0 324231 00:06:17.742 12:33:50 -- common/autotest_common.sh@941 -- # uname 00:06:17.742 12:33:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:17.742 12:33:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 324231 00:06:18.003 12:33:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.003 12:33:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.003 12:33:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 324231' 00:06:18.003 killing process with pid 324231 00:06:18.003 12:33:50 -- common/autotest_common.sh@955 -- # kill 324231 00:06:18.003 12:33:50 -- common/autotest_common.sh@960 -- # wait 324231 00:06:18.003 00:06:18.003 real 0m2.151s 00:06:18.003 user 0m2.440s 00:06:18.003 sys 0m0.580s 00:06:18.003 12:33:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.003 12:33:51 -- common/autotest_common.sh@10 -- # set +x 00:06:18.003 ************************************ 00:06:18.003 END TEST locking_app_on_locked_coremask 00:06:18.003 ************************************ 00:06:18.003 12:33:51 -- event/cpu_locks.sh@171 -- # 
run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:18.003 12:33:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:18.003 12:33:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.003 12:33:51 -- common/autotest_common.sh@10 -- # set +x 00:06:18.003 ************************************ 00:06:18.003 START TEST locking_overlapped_coremask 00:06:18.003 ************************************ 00:06:18.265 12:33:51 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:18.265 12:33:51 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:18.265 12:33:51 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=324781 00:06:18.265 12:33:51 -- event/cpu_locks.sh@133 -- # waitforlisten 324781 /var/tmp/spdk.sock 00:06:18.265 12:33:51 -- common/autotest_common.sh@829 -- # '[' -z 324781 ']' 00:06:18.265 12:33:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.265 12:33:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.265 12:33:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.265 12:33:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.265 12:33:51 -- common/autotest_common.sh@10 -- # set +x 00:06:18.265 [2024-11-20 12:33:51.143081] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:18.265 [2024-11-20 12:33:51.143140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid324781 ] 00:06:18.265 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.265 [2024-11-20 12:33:51.202337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.265 [2024-11-20 12:33:51.265833] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:18.265 [2024-11-20 12:33:51.266049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.265 [2024-11-20 12:33:51.266105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.265 [2024-11-20 12:33:51.266108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.837 12:33:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.837 12:33:51 -- common/autotest_common.sh@862 -- # return 0 00:06:18.837 12:33:51 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=324945 00:06:18.837 12:33:51 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 324945 /var/tmp/spdk2.sock 00:06:18.837 12:33:51 -- common/autotest_common.sh@650 -- # local es=0 00:06:18.837 12:33:51 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:18.837 12:33:51 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 324945 /var/tmp/spdk2.sock 00:06:18.837 12:33:51 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:19.098 12:33:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.098 12:33:51 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:19.098 12:33:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.098 12:33:51 -- common/autotest_common.sh@653 -- # waitforlisten 
324945 /var/tmp/spdk2.sock 00:06:19.098 12:33:51 -- common/autotest_common.sh@829 -- # '[' -z 324945 ']' 00:06:19.098 12:33:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.098 12:33:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.098 12:33:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.098 12:33:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.098 12:33:51 -- common/autotest_common.sh@10 -- # set +x 00:06:19.098 [2024-11-20 12:33:51.990889] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.098 [2024-11-20 12:33:51.990939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid324945 ] 00:06:19.098 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.098 [2024-11-20 12:33:52.062751] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 324781 has claimed it. 00:06:19.098 [2024-11-20 12:33:52.062781] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:19.713 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (324945) - No such process 00:06:19.713 ERROR: process (pid: 324945) is no longer running 00:06:19.713 12:33:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.713 12:33:52 -- common/autotest_common.sh@862 -- # return 1 00:06:19.713 12:33:52 -- common/autotest_common.sh@653 -- # es=1 00:06:19.713 12:33:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.713 12:33:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.713 12:33:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.713 12:33:52 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:19.713 12:33:52 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:19.713 12:33:52 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:19.713 12:33:52 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:19.713 12:33:52 -- event/cpu_locks.sh@141 -- # killprocess 324781 00:06:19.713 12:33:52 -- common/autotest_common.sh@936 -- # '[' -z 324781 ']' 00:06:19.713 12:33:52 -- common/autotest_common.sh@940 -- # kill -0 324781 00:06:19.713 12:33:52 -- common/autotest_common.sh@941 -- # uname 00:06:19.714 12:33:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:19.714 12:33:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 324781 00:06:19.714 12:33:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:19.714 12:33:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:19.714 12:33:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 324781' 00:06:19.714 killing process with pid 324781 00:06:19.714 12:33:52 -- common/autotest_common.sh@955 -- # kill 324781 00:06:19.714 12:33:52 -- common/autotest_common.sh@960 -- # wait 324781 00:06:19.976 
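The locking_overlapped_coremask failure above is by construction: the first target holds cores 0-2 (-m 0x7), the second asks for cores 2-4 (-m 0x1c), and the one shared core is what produces "Cannot create lock on core 2". The overlap can be read straight off the masks:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4 -> bit 2, i.e. core 2
    # 0x07 = 0b00111  (cores 0,1,2)
    # 0x1c = 0b11100  (cores 2,3,4)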
00:06:19.976 real 0m1.772s 00:06:19.976 user 0m5.116s 00:06:19.976 sys 0m0.355s 00:06:19.976 12:33:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.976 12:33:52 -- common/autotest_common.sh@10 -- # set +x 00:06:19.976 ************************************ 00:06:19.976 END TEST locking_overlapped_coremask 00:06:19.976 ************************************ 00:06:19.976 12:33:52 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:19.976 12:33:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:19.976 12:33:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.976 12:33:52 -- common/autotest_common.sh@10 -- # set +x 00:06:19.976 ************************************ 00:06:19.976 START TEST locking_overlapped_coremask_via_rpc 00:06:19.976 ************************************ 00:06:19.976 12:33:52 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:19.976 12:33:52 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=325216 00:06:19.976 12:33:52 -- event/cpu_locks.sh@149 -- # waitforlisten 325216 /var/tmp/spdk.sock 00:06:19.976 12:33:52 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:19.976 12:33:52 -- common/autotest_common.sh@829 -- # '[' -z 325216 ']' 00:06:19.976 12:33:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.976 12:33:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.976 12:33:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.976 12:33:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.976 12:33:52 -- common/autotest_common.sh@10 -- # set +x 00:06:19.976 [2024-11-20 12:33:52.977499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.976 [2024-11-20 12:33:52.977556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325216 ] 00:06:19.976 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.976 [2024-11-20 12:33:53.040870] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:19.976 [2024-11-20 12:33:53.040909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.239 [2024-11-20 12:33:53.109277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:20.239 [2024-11-20 12:33:53.109535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.239 [2024-11-20 12:33:53.109653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.239 [2024-11-20 12:33:53.109655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.810 12:33:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.810 12:33:53 -- common/autotest_common.sh@862 -- # return 0 00:06:20.810 12:33:53 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:20.810 12:33:53 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=325322 00:06:20.810 12:33:53 -- event/cpu_locks.sh@153 -- # waitforlisten 325322 /var/tmp/spdk2.sock 00:06:20.810 12:33:53 -- common/autotest_common.sh@829 -- # '[' -z 325322 ']' 00:06:20.810 12:33:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.810 12:33:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.810 12:33:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.810 12:33:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.810 12:33:53 -- common/autotest_common.sh@10 -- # set +x 00:06:20.810 [2024-11-20 12:33:53.805704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.810 [2024-11-20 12:33:53.805755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325322 ] 00:06:20.810 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.810 [2024-11-20 12:33:53.878728] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:20.810 [2024-11-20 12:33:53.878753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.071 [2024-11-20 12:33:53.982235] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.071 [2024-11-20 12:33:53.982465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.071 [2024-11-20 12:33:53.982585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.071 [2024-11-20 12:33:53.982589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:21.642 12:33:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.642 12:33:54 -- common/autotest_common.sh@862 -- # return 0 00:06:21.642 12:33:54 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:21.642 12:33:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.642 12:33:54 -- common/autotest_common.sh@10 -- # set +x 00:06:21.642 12:33:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.642 12:33:54 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.642 12:33:54 -- common/autotest_common.sh@650 -- # local es=0 00:06:21.642 12:33:54 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.642 12:33:54 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:21.642 12:33:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.642 12:33:54 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:21.642 12:33:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.642 12:33:54 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.642 12:33:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.642 12:33:54 -- common/autotest_common.sh@10 -- # set +x 00:06:21.642 [2024-11-20 12:33:54.602044] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 325216 has claimed it. 00:06:21.642 request: 00:06:21.642 { 00:06:21.642 "method": "framework_enable_cpumask_locks", 00:06:21.642 "req_id": 1 00:06:21.642 } 00:06:21.642 Got JSON-RPC error response 00:06:21.642 response: 00:06:21.642 { 00:06:21.642 "code": -32603, 00:06:21.642 "message": "Failed to claim CPU core: 2" 00:06:21.642 } 00:06:21.642 12:33:54 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:21.642 12:33:54 -- common/autotest_common.sh@653 -- # es=1 00:06:21.642 12:33:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.642 12:33:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.642 12:33:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.642 12:33:54 -- event/cpu_locks.sh@158 -- # waitforlisten 325216 /var/tmp/spdk.sock 00:06:21.642 12:33:54 -- common/autotest_common.sh@829 -- # '[' -z 325216 ']' 00:06:21.642 12:33:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.642 12:33:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.642 12:33:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
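The via_rpc variant above performs the same claim check over JSON-RPC instead of at startup: both targets launch with --disable-cpumask-locks, framework_enable_cpumask_locks on the first target claims cores 0-2, and the same call against the second target's socket fails with the -32603 response shown. Assuming rpc_cmd is the usual thin wrapper around scripts/rpc.py, the failing call corresponds to roughly:

    # Expected to fail: pid 325216 already holds the flock for core 2.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # request:  {"method": "framework_enable_cpumask_locks", "req_id": 1}
    # response: {"code": -32603, "message": "Failed to claim CPU core: 2"}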
00:06:21.642 12:33:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.642 12:33:54 -- common/autotest_common.sh@10 -- # set +x 00:06:21.903 12:33:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.903 12:33:54 -- common/autotest_common.sh@862 -- # return 0 00:06:21.903 12:33:54 -- event/cpu_locks.sh@159 -- # waitforlisten 325322 /var/tmp/spdk2.sock 00:06:21.903 12:33:54 -- common/autotest_common.sh@829 -- # '[' -z 325322 ']' 00:06:21.903 12:33:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.903 12:33:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.903 12:33:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.903 12:33:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.903 12:33:54 -- common/autotest_common.sh@10 -- # set +x 00:06:21.903 12:33:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.903 12:33:54 -- common/autotest_common.sh@862 -- # return 0 00:06:21.903 12:33:54 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:21.903 12:33:54 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:21.903 12:33:54 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:21.903 12:33:54 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:21.903 00:06:21.903 real 0m2.024s 00:06:21.903 user 0m0.804s 00:06:21.903 sys 0m0.145s 00:06:21.903 12:33:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.903 12:33:54 -- common/autotest_common.sh@10 -- # set +x 00:06:21.903 ************************************ 00:06:21.903 END TEST locking_overlapped_coremask_via_rpc 00:06:21.903 ************************************ 00:06:21.903 12:33:54 -- event/cpu_locks.sh@174 -- # cleanup 00:06:21.903 12:33:54 -- event/cpu_locks.sh@15 -- # [[ -z 325216 ]] 00:06:21.903 12:33:54 -- event/cpu_locks.sh@15 -- # killprocess 325216 00:06:21.903 12:33:54 -- common/autotest_common.sh@936 -- # '[' -z 325216 ']' 00:06:21.903 12:33:54 -- common/autotest_common.sh@940 -- # kill -0 325216 00:06:21.903 12:33:54 -- common/autotest_common.sh@941 -- # uname 00:06:21.903 12:33:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:21.903 12:33:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 325216 00:06:22.163 12:33:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:22.163 12:33:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:22.163 12:33:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 325216' 00:06:22.163 killing process with pid 325216 00:06:22.163 12:33:55 -- common/autotest_common.sh@955 -- # kill 325216 00:06:22.163 12:33:55 -- common/autotest_common.sh@960 -- # wait 325216 00:06:22.424 12:33:55 -- event/cpu_locks.sh@16 -- # [[ -z 325322 ]] 00:06:22.424 12:33:55 -- event/cpu_locks.sh@16 -- # killprocess 325322 00:06:22.424 12:33:55 -- common/autotest_common.sh@936 -- # '[' -z 325322 ']' 00:06:22.424 12:33:55 -- common/autotest_common.sh@940 -- # kill -0 325322 00:06:22.424 12:33:55 -- common/autotest_common.sh@941 -- # uname 00:06:22.424 
12:33:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:22.424 12:33:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 325322 00:06:22.424 12:33:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:22.424 12:33:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:22.424 12:33:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 325322' 00:06:22.424 killing process with pid 325322 00:06:22.424 12:33:55 -- common/autotest_common.sh@955 -- # kill 325322 00:06:22.424 12:33:55 -- common/autotest_common.sh@960 -- # wait 325322 00:06:22.424 12:33:55 -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.686 12:33:55 -- event/cpu_locks.sh@1 -- # cleanup 00:06:22.686 12:33:55 -- event/cpu_locks.sh@15 -- # [[ -z 325216 ]] 00:06:22.686 12:33:55 -- event/cpu_locks.sh@15 -- # killprocess 325216 00:06:22.686 12:33:55 -- common/autotest_common.sh@936 -- # '[' -z 325216 ']' 00:06:22.686 12:33:55 -- common/autotest_common.sh@940 -- # kill -0 325216 00:06:22.686 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (325216) - No such process 00:06:22.686 12:33:55 -- common/autotest_common.sh@963 -- # echo 'Process with pid 325216 is not found' 00:06:22.686 Process with pid 325216 is not found 00:06:22.686 12:33:55 -- event/cpu_locks.sh@16 -- # [[ -z 325322 ]] 00:06:22.686 12:33:55 -- event/cpu_locks.sh@16 -- # killprocess 325322 00:06:22.686 12:33:55 -- common/autotest_common.sh@936 -- # '[' -z 325322 ']' 00:06:22.686 12:33:55 -- common/autotest_common.sh@940 -- # kill -0 325322 00:06:22.686 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (325322) - No such process 00:06:22.686 12:33:55 -- common/autotest_common.sh@963 -- # echo 'Process with pid 325322 is not found' 00:06:22.686 Process with pid 325322 is not found 00:06:22.686 12:33:55 -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.686 00:06:22.686 real 0m16.179s 00:06:22.686 user 0m28.216s 00:06:22.686 sys 0m4.714s 00:06:22.686 12:33:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.686 12:33:55 -- common/autotest_common.sh@10 -- # set +x 00:06:22.686 ************************************ 00:06:22.686 END TEST cpu_locks 00:06:22.686 ************************************ 00:06:22.687 00:06:22.687 real 0m42.398s 00:06:22.687 user 1m22.911s 00:06:22.687 sys 0m7.799s 00:06:22.687 12:33:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.687 12:33:55 -- common/autotest_common.sh@10 -- # set +x 00:06:22.687 ************************************ 00:06:22.687 END TEST event 00:06:22.687 ************************************ 00:06:22.687 12:33:55 -- spdk/autotest.sh@175 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:22.687 12:33:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.687 12:33:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.687 12:33:55 -- common/autotest_common.sh@10 -- # set +x 00:06:22.687 ************************************ 00:06:22.687 START TEST thread 00:06:22.687 ************************************ 00:06:22.687 12:33:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:22.687 * Looking for test storage... 
00:06:22.687 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:22.687 12:33:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:22.687 12:33:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:22.687 12:33:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:22.687 12:33:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:22.687 12:33:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:22.687 12:33:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:22.687 12:33:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:22.687 12:33:55 -- scripts/common.sh@335 -- # IFS=.-: 00:06:22.687 12:33:55 -- scripts/common.sh@335 -- # read -ra ver1 00:06:22.687 12:33:55 -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.687 12:33:55 -- scripts/common.sh@336 -- # read -ra ver2 00:06:22.687 12:33:55 -- scripts/common.sh@337 -- # local 'op=<' 00:06:22.687 12:33:55 -- scripts/common.sh@339 -- # ver1_l=2 00:06:22.687 12:33:55 -- scripts/common.sh@340 -- # ver2_l=1 00:06:22.687 12:33:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:22.687 12:33:55 -- scripts/common.sh@343 -- # case "$op" in 00:06:22.687 12:33:55 -- scripts/common.sh@344 -- # : 1 00:06:22.687 12:33:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:22.687 12:33:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.949 12:33:55 -- scripts/common.sh@364 -- # decimal 1 00:06:22.949 12:33:55 -- scripts/common.sh@352 -- # local d=1 00:06:22.949 12:33:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.949 12:33:55 -- scripts/common.sh@354 -- # echo 1 00:06:22.949 12:33:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:22.949 12:33:55 -- scripts/common.sh@365 -- # decimal 2 00:06:22.949 12:33:55 -- scripts/common.sh@352 -- # local d=2 00:06:22.949 12:33:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.949 12:33:55 -- scripts/common.sh@354 -- # echo 2 00:06:22.949 12:33:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:22.949 12:33:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:22.949 12:33:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:22.949 12:33:55 -- scripts/common.sh@367 -- # return 0 00:06:22.949 12:33:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.949 12:33:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:22.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.949 --rc genhtml_branch_coverage=1 00:06:22.949 --rc genhtml_function_coverage=1 00:06:22.949 --rc genhtml_legend=1 00:06:22.949 --rc geninfo_all_blocks=1 00:06:22.949 --rc geninfo_unexecuted_blocks=1 00:06:22.949 00:06:22.949 ' 00:06:22.949 12:33:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:22.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.949 --rc genhtml_branch_coverage=1 00:06:22.949 --rc genhtml_function_coverage=1 00:06:22.949 --rc genhtml_legend=1 00:06:22.949 --rc geninfo_all_blocks=1 00:06:22.949 --rc geninfo_unexecuted_blocks=1 00:06:22.949 00:06:22.949 ' 00:06:22.949 12:33:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:22.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.949 --rc genhtml_branch_coverage=1 00:06:22.949 --rc genhtml_function_coverage=1 00:06:22.949 --rc genhtml_legend=1 00:06:22.949 --rc geninfo_all_blocks=1 00:06:22.949 --rc geninfo_unexecuted_blocks=1 00:06:22.949 00:06:22.949 ' 
00:06:22.949 12:33:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:22.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.949 --rc genhtml_branch_coverage=1 00:06:22.949 --rc genhtml_function_coverage=1 00:06:22.949 --rc genhtml_legend=1 00:06:22.949 --rc geninfo_all_blocks=1 00:06:22.949 --rc geninfo_unexecuted_blocks=1 00:06:22.949 00:06:22.949 ' 00:06:22.949 12:33:55 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.949 12:33:55 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:22.949 12:33:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.949 12:33:55 -- common/autotest_common.sh@10 -- # set +x 00:06:22.949 ************************************ 00:06:22.949 START TEST thread_poller_perf 00:06:22.949 ************************************ 00:06:22.949 12:33:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.949 [2024-11-20 12:33:55.829046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.949 [2024-11-20 12:33:55.829166] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325766 ] 00:06:22.949 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.949 [2024-11-20 12:33:55.898114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.949 [2024-11-20 12:33:55.960458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.949 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:24.335 [2024-11-20T11:33:57.443Z] ====================================== 00:06:24.335 [2024-11-20T11:33:57.443Z] busy:2410358540 (cyc) 00:06:24.335 [2024-11-20T11:33:57.443Z] total_run_count: 276000 00:06:24.335 [2024-11-20T11:33:57.443Z] tsc_hz: 2400000000 (cyc) 00:06:24.335 [2024-11-20T11:33:57.443Z] ====================================== 00:06:24.335 [2024-11-20T11:33:57.443Z] poller_cost: 8733 (cyc), 3638 (nsec) 00:06:24.335 00:06:24.335 real 0m1.208s 00:06:24.335 user 0m1.131s 00:06:24.335 sys 0m0.073s 00:06:24.335 12:33:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.335 12:33:57 -- common/autotest_common.sh@10 -- # set +x 00:06:24.335 ************************************ 00:06:24.335 END TEST thread_poller_perf 00:06:24.335 ************************************ 00:06:24.335 12:33:57 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.335 12:33:57 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:24.335 12:33:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.335 12:33:57 -- common/autotest_common.sh@10 -- # set +x 00:06:24.335 ************************************ 00:06:24.335 START TEST thread_poller_perf 00:06:24.335 ************************************ 00:06:24.335 12:33:57 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.335 [2024-11-20 12:33:57.088015] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:24.335 [2024-11-20 12:33:57.088111] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326122 ] 00:06:24.335 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.335 [2024-11-20 12:33:57.152421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.335 [2024-11-20 12:33:57.211993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.335 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:25.277 [2024-11-20T11:33:58.385Z] ====================================== 00:06:25.277 [2024-11-20T11:33:58.385Z] busy:2402587634 (cyc) 00:06:25.277 [2024-11-20T11:33:58.385Z] total_run_count: 3760000 00:06:25.277 [2024-11-20T11:33:58.385Z] tsc_hz: 2400000000 (cyc) 00:06:25.277 [2024-11-20T11:33:58.385Z] ====================================== 00:06:25.277 [2024-11-20T11:33:58.385Z] poller_cost: 638 (cyc), 265 (nsec) 00:06:25.277 00:06:25.277 real 0m1.200s 00:06:25.277 user 0m1.132s 00:06:25.277 sys 0m0.063s 00:06:25.277 12:33:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.277 12:33:58 -- common/autotest_common.sh@10 -- # set +x 00:06:25.277 ************************************ 00:06:25.277 END TEST thread_poller_perf 00:06:25.277 ************************************ 00:06:25.277 12:33:58 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:25.277 00:06:25.277 real 0m2.684s 00:06:25.277 user 0m2.402s 00:06:25.277 sys 0m0.299s 00:06:25.277 12:33:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.277 12:33:58 -- common/autotest_common.sh@10 -- # set +x 00:06:25.277 ************************************ 00:06:25.277 END TEST thread 00:06:25.277 ************************************ 00:06:25.277 12:33:58 -- spdk/autotest.sh@176 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:25.277 12:33:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.277 12:33:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.277 12:33:58 -- common/autotest_common.sh@10 -- # set +x 00:06:25.277 ************************************ 00:06:25.277 START TEST accel 00:06:25.277 ************************************ 00:06:25.277 12:33:58 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:25.539 * Looking for test storage... 
00:06:25.539 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:25.539 12:33:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:25.539 12:33:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:25.539 12:33:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:25.539 12:33:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:25.539 12:33:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:25.539 12:33:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:25.539 12:33:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:25.539 12:33:58 -- scripts/common.sh@335 -- # IFS=.-: 00:06:25.539 12:33:58 -- scripts/common.sh@335 -- # read -ra ver1 00:06:25.539 12:33:58 -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.539 12:33:58 -- scripts/common.sh@336 -- # read -ra ver2 00:06:25.539 12:33:58 -- scripts/common.sh@337 -- # local 'op=<' 00:06:25.539 12:33:58 -- scripts/common.sh@339 -- # ver1_l=2 00:06:25.539 12:33:58 -- scripts/common.sh@340 -- # ver2_l=1 00:06:25.539 12:33:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:25.539 12:33:58 -- scripts/common.sh@343 -- # case "$op" in 00:06:25.539 12:33:58 -- scripts/common.sh@344 -- # : 1 00:06:25.539 12:33:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:25.539 12:33:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.539 12:33:58 -- scripts/common.sh@364 -- # decimal 1 00:06:25.539 12:33:58 -- scripts/common.sh@352 -- # local d=1 00:06:25.539 12:33:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.539 12:33:58 -- scripts/common.sh@354 -- # echo 1 00:06:25.539 12:33:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:25.539 12:33:58 -- scripts/common.sh@365 -- # decimal 2 00:06:25.539 12:33:58 -- scripts/common.sh@352 -- # local d=2 00:06:25.539 12:33:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.539 12:33:58 -- scripts/common.sh@354 -- # echo 2 00:06:25.539 12:33:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:25.539 12:33:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:25.539 12:33:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:25.539 12:33:58 -- scripts/common.sh@367 -- # return 0 00:06:25.539 12:33:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.539 12:33:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:25.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.539 --rc genhtml_branch_coverage=1 00:06:25.539 --rc genhtml_function_coverage=1 00:06:25.539 --rc genhtml_legend=1 00:06:25.539 --rc geninfo_all_blocks=1 00:06:25.539 --rc geninfo_unexecuted_blocks=1 00:06:25.539 00:06:25.539 ' 00:06:25.539 12:33:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:25.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.539 --rc genhtml_branch_coverage=1 00:06:25.539 --rc genhtml_function_coverage=1 00:06:25.539 --rc genhtml_legend=1 00:06:25.539 --rc geninfo_all_blocks=1 00:06:25.539 --rc geninfo_unexecuted_blocks=1 00:06:25.539 00:06:25.539 ' 00:06:25.539 12:33:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:25.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.539 --rc genhtml_branch_coverage=1 00:06:25.539 --rc genhtml_function_coverage=1 00:06:25.539 --rc genhtml_legend=1 00:06:25.539 --rc geninfo_all_blocks=1 00:06:25.539 --rc geninfo_unexecuted_blocks=1 00:06:25.539 00:06:25.539 ' 
00:06:25.539 12:33:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:25.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.539 --rc genhtml_branch_coverage=1 00:06:25.539 --rc genhtml_function_coverage=1 00:06:25.539 --rc genhtml_legend=1 00:06:25.539 --rc geninfo_all_blocks=1 00:06:25.539 --rc geninfo_unexecuted_blocks=1 00:06:25.539 00:06:25.539 ' 00:06:25.539 12:33:58 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:25.539 12:33:58 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:25.539 12:33:58 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:25.539 12:33:58 -- accel/accel.sh@59 -- # spdk_tgt_pid=326518 00:06:25.539 12:33:58 -- accel/accel.sh@60 -- # waitforlisten 326518 00:06:25.539 12:33:58 -- common/autotest_common.sh@829 -- # '[' -z 326518 ']' 00:06:25.539 12:33:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.539 12:33:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.539 12:33:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.539 12:33:58 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:25.539 12:33:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.539 12:33:58 -- accel/accel.sh@58 -- # build_accel_config 00:06:25.539 12:33:58 -- common/autotest_common.sh@10 -- # set +x 00:06:25.539 12:33:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.539 12:33:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.539 12:33:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.539 12:33:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.539 12:33:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.539 12:33:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.539 12:33:58 -- accel/accel.sh@42 -- # jq -r . 00:06:25.539 [2024-11-20 12:33:58.589203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.539 [2024-11-20 12:33:58.589278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326518 ] 00:06:25.539 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.800 [2024-11-20 12:33:58.653235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.800 [2024-11-20 12:33:58.724807] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:25.800 [2024-11-20 12:33:58.724942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.373 12:33:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.373 12:33:59 -- common/autotest_common.sh@862 -- # return 0 00:06:26.373 12:33:59 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:26.373 12:33:59 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:26.373 12:33:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.373 12:33:59 -- common/autotest_common.sh@10 -- # set +x 00:06:26.373 12:33:59 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:26.373 12:33:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.373 12:33:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.373 12:33:59 -- accel/accel.sh@64 -- # IFS== 00:06:26.373 12:33:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.373 12:33:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.373 12:33:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.373 12:33:59 -- accel/accel.sh@64 -- # IFS== 00:06:26.373 12:33:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.373 12:33:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.373 12:33:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.373 12:33:59 -- accel/accel.sh@64 -- # IFS== 00:06:26.373 12:33:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.373 12:33:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.373 12:33:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.373 12:33:59 -- accel/accel.sh@64 -- # IFS== 00:06:26.373 12:33:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.373 12:33:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.373 12:33:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.373 12:33:59 -- accel/accel.sh@64 -- # IFS== 00:06:26.373 12:33:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.373 12:33:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.374 12:33:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # IFS== 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.374 12:33:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.374 12:33:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # IFS== 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.374 12:33:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.374 12:33:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # IFS== 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.374 12:33:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.374 12:33:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # IFS== 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.374 12:33:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.374 12:33:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # IFS== 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.374 12:33:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.374 12:33:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # IFS== 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.374 12:33:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.374 12:33:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # IFS== 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.374 12:33:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.374 12:33:59 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # IFS== 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.374 12:33:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.374 12:33:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # IFS== 00:06:26.374 12:33:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.374 12:33:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.374 12:33:59 -- accel/accel.sh@67 -- # killprocess 326518 00:06:26.374 12:33:59 -- common/autotest_common.sh@936 -- # '[' -z 326518 ']' 00:06:26.374 12:33:59 -- common/autotest_common.sh@940 -- # kill -0 326518 00:06:26.374 12:33:59 -- common/autotest_common.sh@941 -- # uname 00:06:26.374 12:33:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:26.374 12:33:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 326518 00:06:26.638 12:33:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:26.638 12:33:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:26.638 12:33:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 326518' 00:06:26.638 killing process with pid 326518 00:06:26.638 12:33:59 -- common/autotest_common.sh@955 -- # kill 326518 00:06:26.638 12:33:59 -- common/autotest_common.sh@960 -- # wait 326518 00:06:26.638 12:33:59 -- accel/accel.sh@68 -- # trap - ERR 00:06:26.638 12:33:59 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:26.638 12:33:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:26.638 12:33:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.638 12:33:59 -- common/autotest_common.sh@10 -- # set +x 00:06:26.638 12:33:59 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:26.638 12:33:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:26.638 12:33:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.638 12:33:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.638 12:33:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.638 12:33:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.638 12:33:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.638 12:33:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.638 12:33:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.638 12:33:59 -- accel/accel.sh@42 -- # jq -r . 
00:06:26.638 12:33:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.638 12:33:59 -- common/autotest_common.sh@10 -- # set +x 00:06:26.898 12:33:59 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:26.898 12:33:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:26.898 12:33:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.898 12:33:59 -- common/autotest_common.sh@10 -- # set +x 00:06:26.898 ************************************ 00:06:26.898 START TEST accel_missing_filename 00:06:26.898 ************************************ 00:06:26.898 12:33:59 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:26.898 12:33:59 -- common/autotest_common.sh@650 -- # local es=0 00:06:26.898 12:33:59 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:26.898 12:33:59 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:26.898 12:33:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.898 12:33:59 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:26.898 12:33:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.898 12:33:59 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:26.898 12:33:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:26.898 12:33:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.898 12:33:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.898 12:33:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.898 12:33:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.898 12:33:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.898 12:33:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.898 12:33:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.898 12:33:59 -- accel/accel.sh@42 -- # jq -r . 00:06:26.898 [2024-11-20 12:33:59.801630] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.898 [2024-11-20 12:33:59.801712] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326834 ] 00:06:26.898 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.899 [2024-11-20 12:33:59.865499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.899 [2024-11-20 12:33:59.933061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.899 [2024-11-20 12:33:59.965012] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.899 [2024-11-20 12:34:00.002501] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:27.160 A filename is required. 
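The failure above is the point of accel_missing_filename: a compress workload needs an uncompressed input file via -l, so the NOT wrapper expects accel_perf to abort with "A filename is required.". A hedged, stand-alone way to reproduce that expectation, using the binary path seen throughout this run:

perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
if "$perf" -t 1 -w compress; then            # no -l given, so this run must not succeed
    echo "compress without -l unexpectedly succeeded" >&2
    exit 1
fi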
00:06:27.160 12:34:00 -- common/autotest_common.sh@653 -- # es=234 00:06:27.160 12:34:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.160 12:34:00 -- common/autotest_common.sh@662 -- # es=106 00:06:27.160 12:34:00 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:27.160 12:34:00 -- common/autotest_common.sh@670 -- # es=1 00:06:27.160 12:34:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.160 00:06:27.160 real 0m0.284s 00:06:27.160 user 0m0.213s 00:06:27.160 sys 0m0.111s 00:06:27.160 12:34:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.160 12:34:00 -- common/autotest_common.sh@10 -- # set +x 00:06:27.160 ************************************ 00:06:27.160 END TEST accel_missing_filename 00:06:27.160 ************************************ 00:06:27.160 12:34:00 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:27.160 12:34:00 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:27.160 12:34:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.160 12:34:00 -- common/autotest_common.sh@10 -- # set +x 00:06:27.160 ************************************ 00:06:27.160 START TEST accel_compress_verify 00:06:27.160 ************************************ 00:06:27.160 12:34:00 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:27.160 12:34:00 -- common/autotest_common.sh@650 -- # local es=0 00:06:27.160 12:34:00 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:27.160 12:34:00 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:27.160 12:34:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.160 12:34:00 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:27.160 12:34:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.160 12:34:00 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:27.160 12:34:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:27.160 12:34:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.160 12:34:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.160 12:34:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.160 12:34:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.160 12:34:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.160 12:34:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.160 12:34:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.160 12:34:00 -- accel/accel.sh@42 -- # jq -r . 00:06:27.160 [2024-11-20 12:34:00.128780] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:27.160 [2024-11-20 12:34:00.128858] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326915 ] 00:06:27.160 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.160 [2024-11-20 12:34:00.191016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.160 [2024-11-20 12:34:00.254340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.422 [2024-11-20 12:34:00.286144] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.422 [2024-11-20 12:34:00.323147] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:27.422 00:06:27.422 Compression does not support the verify option, aborting. 00:06:27.422 12:34:00 -- common/autotest_common.sh@653 -- # es=161 00:06:27.422 12:34:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.422 12:34:00 -- common/autotest_common.sh@662 -- # es=33 00:06:27.422 12:34:00 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:27.422 12:34:00 -- common/autotest_common.sh@670 -- # es=1 00:06:27.422 12:34:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.422 00:06:27.422 real 0m0.277s 00:06:27.422 user 0m0.210s 00:06:27.422 sys 0m0.110s 00:06:27.422 12:34:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.422 12:34:00 -- common/autotest_common.sh@10 -- # set +x 00:06:27.422 ************************************ 00:06:27.422 END TEST accel_compress_verify 00:06:27.422 ************************************ 00:06:27.422 12:34:00 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:27.422 12:34:00 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:27.422 12:34:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.422 12:34:00 -- common/autotest_common.sh@10 -- # set +x 00:06:27.422 ************************************ 00:06:27.422 START TEST accel_wrong_workload 00:06:27.422 ************************************ 00:06:27.422 12:34:00 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:27.422 12:34:00 -- common/autotest_common.sh@650 -- # local es=0 00:06:27.422 12:34:00 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:27.422 12:34:00 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:27.422 12:34:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.422 12:34:00 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:27.422 12:34:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.422 12:34:00 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:27.423 12:34:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:27.423 12:34:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.423 12:34:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.423 12:34:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.423 12:34:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.423 12:34:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.423 12:34:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.423 12:34:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.423 12:34:00 -- accel/accel.sh@42 -- # jq -r . 
00:06:27.423 Unsupported workload type: foobar 00:06:27.423 [2024-11-20 12:34:00.448561] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:27.423 accel_perf options: 00:06:27.423 [-h help message] 00:06:27.423 [-q queue depth per core] 00:06:27.423 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:27.423 [-T number of threads per core 00:06:27.423 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:27.423 [-t time in seconds] 00:06:27.423 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:27.423 [ dif_verify, , dif_generate, dif_generate_copy 00:06:27.423 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:27.423 [-l for compress/decompress workloads, name of uncompressed input file 00:06:27.423 [-S for crc32c workload, use this seed value (default 0) 00:06:27.423 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:27.423 [-f for fill workload, use this BYTE value (default 255) 00:06:27.423 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:27.423 [-y verify result if this switch is on] 00:06:27.423 [-a tasks to allocate per core (default: same value as -q)] 00:06:27.423 Can be used to spread operations across a wider range of memory. 00:06:27.423 12:34:00 -- common/autotest_common.sh@653 -- # es=1 00:06:27.423 12:34:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.423 12:34:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:27.423 12:34:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.423 00:06:27.423 real 0m0.037s 00:06:27.423 user 0m0.023s 00:06:27.423 sys 0m0.014s 00:06:27.423 12:34:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.423 12:34:00 -- common/autotest_common.sh@10 -- # set +x 00:06:27.423 ************************************ 00:06:27.423 END TEST accel_wrong_workload 00:06:27.423 ************************************ 00:06:27.423 Error: writing output failed: Broken pipe 00:06:27.423 12:34:00 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:27.423 12:34:00 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:27.423 12:34:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.423 12:34:00 -- common/autotest_common.sh@10 -- # set +x 00:06:27.423 ************************************ 00:06:27.423 START TEST accel_negative_buffers 00:06:27.423 ************************************ 00:06:27.423 12:34:00 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:27.423 12:34:00 -- common/autotest_common.sh@650 -- # local es=0 00:06:27.423 12:34:00 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:27.423 12:34:00 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:27.423 12:34:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.423 12:34:00 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:27.423 12:34:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.423 12:34:00 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:27.423 12:34:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 
-x -1 00:06:27.423 12:34:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.423 12:34:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.423 12:34:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.423 12:34:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.423 12:34:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.423 12:34:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.423 12:34:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.423 12:34:00 -- accel/accel.sh@42 -- # jq -r . 00:06:27.423 -x option must be non-negative. 00:06:27.684 [2024-11-20 12:34:00.528942] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:27.684 accel_perf options: 00:06:27.684 [-h help message] 00:06:27.685 [-q queue depth per core] 00:06:27.685 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:27.685 [-T number of threads per core 00:06:27.685 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:27.685 [-t time in seconds] 00:06:27.685 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:27.685 [ dif_verify, , dif_generate, dif_generate_copy 00:06:27.685 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:27.685 [-l for compress/decompress workloads, name of uncompressed input file 00:06:27.685 [-S for crc32c workload, use this seed value (default 0) 00:06:27.685 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:27.685 [-f for fill workload, use this BYTE value (default 255) 00:06:27.685 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:27.685 [-y verify result if this switch is on] 00:06:27.685 [-a tasks to allocate per core (default: same value as -q)] 00:06:27.685 Can be used to spread operations across a wider range of memory. 
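The usage text above is printed because the two negative tests pass an unknown workload (-w foobar) and a negative xor source-buffer count (-x -1). For contrast, a sketch of valid invocations built only from flags listed in that usage text (and seen elsewhere in this run); the binary path is the one used throughout:

perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
"$perf" -t 1 -q 32 -o 4096 -w crc32c -S 32 -y    # 1-second verified crc32c run, seed 32
"$perf" -t 1 -w xor -y -x 2                      # xor requires at least two source buffers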
00:06:27.685 12:34:00 -- common/autotest_common.sh@653 -- # es=1 00:06:27.685 12:34:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.685 12:34:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:27.685 12:34:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.685 00:06:27.685 real 0m0.037s 00:06:27.685 user 0m0.021s 00:06:27.685 sys 0m0.015s 00:06:27.685 12:34:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.685 12:34:00 -- common/autotest_common.sh@10 -- # set +x 00:06:27.685 ************************************ 00:06:27.685 END TEST accel_negative_buffers 00:06:27.685 ************************************ 00:06:27.685 Error: writing output failed: Broken pipe 00:06:27.685 12:34:00 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:27.685 12:34:00 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:27.685 12:34:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.685 12:34:00 -- common/autotest_common.sh@10 -- # set +x 00:06:27.685 ************************************ 00:06:27.685 START TEST accel_crc32c 00:06:27.685 ************************************ 00:06:27.685 12:34:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:27.685 12:34:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.685 12:34:00 -- accel/accel.sh@17 -- # local accel_module 00:06:27.685 12:34:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:27.685 12:34:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:27.685 12:34:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.685 12:34:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.685 12:34:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.685 12:34:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.685 12:34:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.685 12:34:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.685 12:34:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.685 12:34:00 -- accel/accel.sh@42 -- # jq -r . 00:06:27.685 [2024-11-20 12:34:00.608298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.685 [2024-11-20 12:34:00.608406] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326973 ] 00:06:27.685 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.685 [2024-11-20 12:34:00.682128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.685 [2024-11-20 12:34:00.746819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.070 12:34:01 -- accel/accel.sh@18 -- # out=' 00:06:29.070 SPDK Configuration: 00:06:29.070 Core mask: 0x1 00:06:29.070 00:06:29.070 Accel Perf Configuration: 00:06:29.070 Workload Type: crc32c 00:06:29.070 CRC-32C seed: 32 00:06:29.070 Transfer size: 4096 bytes 00:06:29.070 Vector count 1 00:06:29.070 Module: software 00:06:29.070 Queue depth: 32 00:06:29.070 Allocate depth: 32 00:06:29.070 # threads/core: 1 00:06:29.070 Run time: 1 seconds 00:06:29.070 Verify: Yes 00:06:29.070 00:06:29.070 Running for 1 seconds... 
00:06:29.070 00:06:29.070 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.070 ------------------------------------------------------------------------------------ 00:06:29.070 0,0 448768/s 1753 MiB/s 0 0 00:06:29.070 ==================================================================================== 00:06:29.070 Total 448768/s 1753 MiB/s 0 0' 00:06:29.070 12:34:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.070 12:34:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:29.070 12:34:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:29.070 12:34:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.070 12:34:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.070 12:34:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.070 12:34:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.070 12:34:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.070 12:34:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.070 12:34:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.070 12:34:01 -- accel/accel.sh@42 -- # jq -r . 00:06:29.070 [2024-11-20 12:34:01.899675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:29.070 [2024-11-20 12:34:01.899755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid327307 ] 00:06:29.070 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.070 [2024-11-20 12:34:01.961244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.070 [2024-11-20 12:34:02.023322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.070 12:34:02 -- accel/accel.sh@21 -- # val= 00:06:29.070 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.070 12:34:02 -- accel/accel.sh@21 -- # val= 00:06:29.070 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.070 12:34:02 -- accel/accel.sh@21 -- # val=0x1 00:06:29.070 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.070 12:34:02 -- accel/accel.sh@21 -- # val= 00:06:29.070 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.070 12:34:02 -- accel/accel.sh@21 -- # val= 00:06:29.070 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.070 12:34:02 -- accel/accel.sh@21 -- # val=crc32c 00:06:29.070 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.070 12:34:02 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.070 12:34:02 -- accel/accel.sh@21 -- # val=32 00:06:29.070 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:02 
-- accel/accel.sh@20 -- # read -r var val 00:06:29.070 12:34:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.070 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.070 12:34:02 -- accel/accel.sh@21 -- # val= 00:06:29.070 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.070 12:34:02 -- accel/accel.sh@21 -- # val=software 00:06:29.070 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.070 12:34:02 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.070 12:34:02 -- accel/accel.sh@21 -- # val=32 00:06:29.070 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.070 12:34:02 -- accel/accel.sh@21 -- # val=32 00:06:29.070 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.070 12:34:02 -- accel/accel.sh@21 -- # val=1 00:06:29.070 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.070 12:34:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.070 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.070 12:34:02 -- accel/accel.sh@21 -- # val=Yes 00:06:29.070 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.070 12:34:02 -- accel/accel.sh@21 -- # val= 00:06:29.070 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.070 12:34:02 -- accel/accel.sh@21 -- # val= 00:06:29.070 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.070 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:30.456 12:34:03 -- accel/accel.sh@21 -- # val= 00:06:30.456 12:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.456 12:34:03 -- accel/accel.sh@20 -- # IFS=: 00:06:30.456 12:34:03 -- accel/accel.sh@20 -- # read -r var val 00:06:30.456 12:34:03 -- accel/accel.sh@21 -- # val= 00:06:30.456 12:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.456 12:34:03 -- accel/accel.sh@20 -- # IFS=: 00:06:30.456 12:34:03 -- accel/accel.sh@20 -- # read -r var val 00:06:30.456 12:34:03 -- accel/accel.sh@21 -- # val= 00:06:30.456 12:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.456 12:34:03 -- accel/accel.sh@20 -- # IFS=: 00:06:30.456 12:34:03 -- accel/accel.sh@20 -- # read -r var val 00:06:30.456 12:34:03 -- accel/accel.sh@21 -- # val= 00:06:30.456 12:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.456 12:34:03 -- accel/accel.sh@20 -- # IFS=: 00:06:30.456 12:34:03 -- accel/accel.sh@20 -- # read -r var val 00:06:30.456 12:34:03 -- accel/accel.sh@21 -- # val= 00:06:30.456 12:34:03 -- accel/accel.sh@22 -- # case "$var" in 
00:06:30.457 12:34:03 -- accel/accel.sh@20 -- # IFS=: 00:06:30.457 12:34:03 -- accel/accel.sh@20 -- # read -r var val 00:06:30.457 12:34:03 -- accel/accel.sh@21 -- # val= 00:06:30.457 12:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.457 12:34:03 -- accel/accel.sh@20 -- # IFS=: 00:06:30.457 12:34:03 -- accel/accel.sh@20 -- # read -r var val 00:06:30.457 12:34:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:30.457 12:34:03 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:30.457 12:34:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.457 00:06:30.457 real 0m2.572s 00:06:30.457 user 0m2.366s 00:06:30.457 sys 0m0.212s 00:06:30.457 12:34:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.457 12:34:03 -- common/autotest_common.sh@10 -- # set +x 00:06:30.457 ************************************ 00:06:30.457 END TEST accel_crc32c 00:06:30.457 ************************************ 00:06:30.457 12:34:03 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:30.457 12:34:03 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:30.457 12:34:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.457 12:34:03 -- common/autotest_common.sh@10 -- # set +x 00:06:30.457 ************************************ 00:06:30.457 START TEST accel_crc32c_C2 00:06:30.457 ************************************ 00:06:30.457 12:34:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:30.457 12:34:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.457 12:34:03 -- accel/accel.sh@17 -- # local accel_module 00:06:30.457 12:34:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:30.457 12:34:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:30.457 12:34:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.457 12:34:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.457 12:34:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.457 12:34:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.457 12:34:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.457 12:34:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.457 12:34:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.457 12:34:03 -- accel/accel.sh@42 -- # jq -r . 00:06:30.457 [2024-11-20 12:34:03.222115] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:30.457 [2024-11-20 12:34:03.222201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid327650 ] 00:06:30.457 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.457 [2024-11-20 12:34:03.285092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.457 [2024-11-20 12:34:03.347658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.398 12:34:04 -- accel/accel.sh@18 -- # out=' 00:06:31.398 SPDK Configuration: 00:06:31.398 Core mask: 0x1 00:06:31.398 00:06:31.398 Accel Perf Configuration: 00:06:31.398 Workload Type: crc32c 00:06:31.398 CRC-32C seed: 0 00:06:31.398 Transfer size: 4096 bytes 00:06:31.398 Vector count 2 00:06:31.398 Module: software 00:06:31.398 Queue depth: 32 00:06:31.398 Allocate depth: 32 00:06:31.398 # threads/core: 1 00:06:31.398 Run time: 1 seconds 00:06:31.398 Verify: Yes 00:06:31.398 00:06:31.398 Running for 1 seconds... 00:06:31.398 00:06:31.398 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.399 ------------------------------------------------------------------------------------ 00:06:31.399 0,0 376480/s 2941 MiB/s 0 0 00:06:31.399 ==================================================================================== 00:06:31.399 Total 376480/s 1470 MiB/s 0 0' 00:06:31.399 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.399 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.399 12:34:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:31.399 12:34:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:31.399 12:34:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.399 12:34:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.399 12:34:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.399 12:34:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.399 12:34:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.399 12:34:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.399 12:34:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.399 12:34:04 -- accel/accel.sh@42 -- # jq -r . 00:06:31.399 [2024-11-20 12:34:04.498888] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:31.399 [2024-11-20 12:34:04.498966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid327765 ] 00:06:31.659 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.659 [2024-11-20 12:34:04.560317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.660 [2024-11-20 12:34:04.622681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.660 12:34:04 -- accel/accel.sh@21 -- # val= 00:06:31.660 12:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.660 12:34:04 -- accel/accel.sh@21 -- # val= 00:06:31.660 12:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.660 12:34:04 -- accel/accel.sh@21 -- # val=0x1 00:06:31.660 12:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.660 12:34:04 -- accel/accel.sh@21 -- # val= 00:06:31.660 12:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.660 12:34:04 -- accel/accel.sh@21 -- # val= 00:06:31.660 12:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.660 12:34:04 -- accel/accel.sh@21 -- # val=crc32c 00:06:31.660 12:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.660 12:34:04 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.660 12:34:04 -- accel/accel.sh@21 -- # val=0 00:06:31.660 12:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.660 12:34:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.660 12:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.660 12:34:04 -- accel/accel.sh@21 -- # val= 00:06:31.660 12:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.660 12:34:04 -- accel/accel.sh@21 -- # val=software 00:06:31.660 12:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.660 12:34:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.660 12:34:04 -- accel/accel.sh@21 -- # val=32 00:06:31.660 12:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.660 12:34:04 -- accel/accel.sh@21 -- # val=32 00:06:31.660 12:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.660 12:34:04 -- 
accel/accel.sh@21 -- # val=1 00:06:31.660 12:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.660 12:34:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:31.660 12:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.660 12:34:04 -- accel/accel.sh@21 -- # val=Yes 00:06:31.660 12:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.660 12:34:04 -- accel/accel.sh@21 -- # val= 00:06:31.660 12:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.660 12:34:04 -- accel/accel.sh@21 -- # val= 00:06:31.660 12:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:31.660 12:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:33.047 12:34:05 -- accel/accel.sh@21 -- # val= 00:06:33.047 12:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.047 12:34:05 -- accel/accel.sh@20 -- # IFS=: 00:06:33.047 12:34:05 -- accel/accel.sh@20 -- # read -r var val 00:06:33.047 12:34:05 -- accel/accel.sh@21 -- # val= 00:06:33.047 12:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.047 12:34:05 -- accel/accel.sh@20 -- # IFS=: 00:06:33.047 12:34:05 -- accel/accel.sh@20 -- # read -r var val 00:06:33.047 12:34:05 -- accel/accel.sh@21 -- # val= 00:06:33.047 12:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.047 12:34:05 -- accel/accel.sh@20 -- # IFS=: 00:06:33.047 12:34:05 -- accel/accel.sh@20 -- # read -r var val 00:06:33.047 12:34:05 -- accel/accel.sh@21 -- # val= 00:06:33.047 12:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.047 12:34:05 -- accel/accel.sh@20 -- # IFS=: 00:06:33.047 12:34:05 -- accel/accel.sh@20 -- # read -r var val 00:06:33.047 12:34:05 -- accel/accel.sh@21 -- # val= 00:06:33.047 12:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.047 12:34:05 -- accel/accel.sh@20 -- # IFS=: 00:06:33.047 12:34:05 -- accel/accel.sh@20 -- # read -r var val 00:06:33.047 12:34:05 -- accel/accel.sh@21 -- # val= 00:06:33.047 12:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.047 12:34:05 -- accel/accel.sh@20 -- # IFS=: 00:06:33.047 12:34:05 -- accel/accel.sh@20 -- # read -r var val 00:06:33.047 12:34:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.047 12:34:05 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:33.047 12:34:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.047 00:06:33.047 real 0m2.556s 00:06:33.047 user 0m2.360s 00:06:33.047 sys 0m0.203s 00:06:33.047 12:34:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.047 12:34:05 -- common/autotest_common.sh@10 -- # set +x 00:06:33.047 ************************************ 00:06:33.047 END TEST accel_crc32c_C2 00:06:33.047 ************************************ 00:06:33.047 12:34:05 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:33.047 12:34:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:33.047 12:34:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.047 12:34:05 -- common/autotest_common.sh@10 -- # set +x 00:06:33.047 ************************************ 00:06:33.047 START TEST accel_copy 
00:06:33.047 ************************************ 00:06:33.047 12:34:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:33.047 12:34:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.047 12:34:05 -- accel/accel.sh@17 -- # local accel_module 00:06:33.047 12:34:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:33.047 12:34:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:33.047 12:34:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.047 12:34:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.047 12:34:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.047 12:34:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.047 12:34:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.047 12:34:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.047 12:34:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.047 12:34:05 -- accel/accel.sh@42 -- # jq -r . 00:06:33.047 [2024-11-20 12:34:05.821972] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:33.047 [2024-11-20 12:34:05.822052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid328034 ] 00:06:33.047 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.047 [2024-11-20 12:34:05.884089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.047 [2024-11-20 12:34:05.949242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.990 12:34:07 -- accel/accel.sh@18 -- # out=' 00:06:33.990 SPDK Configuration: 00:06:33.990 Core mask: 0x1 00:06:33.990 00:06:33.990 Accel Perf Configuration: 00:06:33.990 Workload Type: copy 00:06:33.990 Transfer size: 4096 bytes 00:06:33.990 Vector count 1 00:06:33.990 Module: software 00:06:33.990 Queue depth: 32 00:06:33.990 Allocate depth: 32 00:06:33.990 # threads/core: 1 00:06:33.990 Run time: 1 seconds 00:06:33.990 Verify: Yes 00:06:33.990 00:06:33.990 Running for 1 seconds... 00:06:33.990 00:06:33.990 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.990 ------------------------------------------------------------------------------------ 00:06:33.990 0,0 305056/s 1191 MiB/s 0 0 00:06:33.990 ==================================================================================== 00:06:33.990 Total 305056/s 1191 MiB/s 0 0' 00:06:33.990 12:34:07 -- accel/accel.sh@20 -- # IFS=: 00:06:33.990 12:34:07 -- accel/accel.sh@20 -- # read -r var val 00:06:33.990 12:34:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:33.990 12:34:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:33.990 12:34:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.990 12:34:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.990 12:34:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.990 12:34:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.990 12:34:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.990 12:34:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.990 12:34:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.990 12:34:07 -- accel/accel.sh@42 -- # jq -r . 00:06:34.251 [2024-11-20 12:34:07.102199] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:34.251 [2024-11-20 12:34:07.102304] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid328368 ] 00:06:34.251 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.251 [2024-11-20 12:34:07.164313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.251 [2024-11-20 12:34:07.226542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.251 12:34:07 -- accel/accel.sh@21 -- # val= 00:06:34.251 12:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # IFS=: 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # read -r var val 00:06:34.251 12:34:07 -- accel/accel.sh@21 -- # val= 00:06:34.251 12:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # IFS=: 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # read -r var val 00:06:34.251 12:34:07 -- accel/accel.sh@21 -- # val=0x1 00:06:34.251 12:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # IFS=: 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # read -r var val 00:06:34.251 12:34:07 -- accel/accel.sh@21 -- # val= 00:06:34.251 12:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # IFS=: 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # read -r var val 00:06:34.251 12:34:07 -- accel/accel.sh@21 -- # val= 00:06:34.251 12:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # IFS=: 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # read -r var val 00:06:34.251 12:34:07 -- accel/accel.sh@21 -- # val=copy 00:06:34.251 12:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.251 12:34:07 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # IFS=: 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # read -r var val 00:06:34.251 12:34:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.251 12:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # IFS=: 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # read -r var val 00:06:34.251 12:34:07 -- accel/accel.sh@21 -- # val= 00:06:34.251 12:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # IFS=: 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # read -r var val 00:06:34.251 12:34:07 -- accel/accel.sh@21 -- # val=software 00:06:34.251 12:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.251 12:34:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # IFS=: 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # read -r var val 00:06:34.251 12:34:07 -- accel/accel.sh@21 -- # val=32 00:06:34.251 12:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # IFS=: 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # read -r var val 00:06:34.251 12:34:07 -- accel/accel.sh@21 -- # val=32 00:06:34.251 12:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # IFS=: 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # read -r var val 00:06:34.251 12:34:07 -- accel/accel.sh@21 -- # val=1 00:06:34.251 12:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # IFS=: 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # read -r var val 00:06:34.251 12:34:07 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:34.251 12:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # IFS=: 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # read -r var val 00:06:34.251 12:34:07 -- accel/accel.sh@21 -- # val=Yes 00:06:34.251 12:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # IFS=: 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # read -r var val 00:06:34.251 12:34:07 -- accel/accel.sh@21 -- # val= 00:06:34.251 12:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # IFS=: 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # read -r var val 00:06:34.251 12:34:07 -- accel/accel.sh@21 -- # val= 00:06:34.251 12:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # IFS=: 00:06:34.251 12:34:07 -- accel/accel.sh@20 -- # read -r var val 00:06:35.636 12:34:08 -- accel/accel.sh@21 -- # val= 00:06:35.636 12:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.636 12:34:08 -- accel/accel.sh@20 -- # IFS=: 00:06:35.636 12:34:08 -- accel/accel.sh@20 -- # read -r var val 00:06:35.636 12:34:08 -- accel/accel.sh@21 -- # val= 00:06:35.636 12:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.636 12:34:08 -- accel/accel.sh@20 -- # IFS=: 00:06:35.636 12:34:08 -- accel/accel.sh@20 -- # read -r var val 00:06:35.636 12:34:08 -- accel/accel.sh@21 -- # val= 00:06:35.636 12:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.636 12:34:08 -- accel/accel.sh@20 -- # IFS=: 00:06:35.636 12:34:08 -- accel/accel.sh@20 -- # read -r var val 00:06:35.636 12:34:08 -- accel/accel.sh@21 -- # val= 00:06:35.636 12:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.636 12:34:08 -- accel/accel.sh@20 -- # IFS=: 00:06:35.636 12:34:08 -- accel/accel.sh@20 -- # read -r var val 00:06:35.636 12:34:08 -- accel/accel.sh@21 -- # val= 00:06:35.636 12:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.636 12:34:08 -- accel/accel.sh@20 -- # IFS=: 00:06:35.636 12:34:08 -- accel/accel.sh@20 -- # read -r var val 00:06:35.636 12:34:08 -- accel/accel.sh@21 -- # val= 00:06:35.636 12:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.636 12:34:08 -- accel/accel.sh@20 -- # IFS=: 00:06:35.636 12:34:08 -- accel/accel.sh@20 -- # read -r var val 00:06:35.636 12:34:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.636 12:34:08 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:35.636 12:34:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.636 00:06:35.636 real 0m2.562s 00:06:35.636 user 0m2.363s 00:06:35.636 sys 0m0.203s 00:06:35.636 12:34:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.636 12:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:35.636 ************************************ 00:06:35.636 END TEST accel_copy 00:06:35.636 ************************************ 00:06:35.636 12:34:08 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.636 12:34:08 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:35.636 12:34:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.636 12:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:35.636 ************************************ 00:06:35.636 START TEST accel_fill 00:06:35.636 ************************************ 00:06:35.636 12:34:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.636 12:34:08 -- accel/accel.sh@16 -- # local accel_opc 
00:06:35.637 12:34:08 -- accel/accel.sh@17 -- # local accel_module 00:06:35.637 12:34:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.637 12:34:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.637 12:34:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.637 12:34:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.637 12:34:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.637 12:34:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.637 12:34:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.637 12:34:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.637 12:34:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.637 12:34:08 -- accel/accel.sh@42 -- # jq -r . 00:06:35.637 [2024-11-20 12:34:08.426868] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:35.637 [2024-11-20 12:34:08.426941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid328725 ] 00:06:35.637 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.637 [2024-11-20 12:34:08.489717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.637 [2024-11-20 12:34:08.554077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.580 12:34:09 -- accel/accel.sh@18 -- # out=' 00:06:36.580 SPDK Configuration: 00:06:36.580 Core mask: 0x1 00:06:36.580 00:06:36.580 Accel Perf Configuration: 00:06:36.580 Workload Type: fill 00:06:36.580 Fill pattern: 0x80 00:06:36.580 Transfer size: 4096 bytes 00:06:36.580 Vector count 1 00:06:36.580 Module: software 00:06:36.580 Queue depth: 64 00:06:36.580 Allocate depth: 64 00:06:36.580 # threads/core: 1 00:06:36.580 Run time: 1 seconds 00:06:36.580 Verify: Yes 00:06:36.580 00:06:36.580 Running for 1 seconds... 00:06:36.580 00:06:36.580 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.580 ------------------------------------------------------------------------------------ 00:06:36.580 0,0 471872/s 1843 MiB/s 0 0 00:06:36.580 ==================================================================================== 00:06:36.580 Total 471872/s 1843 MiB/s 0 0' 00:06:36.580 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.580 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.580 12:34:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:36.580 12:34:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:36.580 12:34:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.580 12:34:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.580 12:34:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.580 12:34:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.580 12:34:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.580 12:34:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.580 12:34:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.580 12:34:09 -- accel/accel.sh@42 -- # jq -r . 00:06:36.841 [2024-11-20 12:34:09.704839] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:36.841 [2024-11-20 12:34:09.704912] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid328913 ] 00:06:36.841 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.841 [2024-11-20 12:34:09.766613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.841 [2024-11-20 12:34:09.828549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.841 12:34:09 -- accel/accel.sh@21 -- # val= 00:06:36.841 12:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.841 12:34:09 -- accel/accel.sh@21 -- # val= 00:06:36.841 12:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.841 12:34:09 -- accel/accel.sh@21 -- # val=0x1 00:06:36.841 12:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.841 12:34:09 -- accel/accel.sh@21 -- # val= 00:06:36.841 12:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.841 12:34:09 -- accel/accel.sh@21 -- # val= 00:06:36.841 12:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.841 12:34:09 -- accel/accel.sh@21 -- # val=fill 00:06:36.841 12:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.841 12:34:09 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.841 12:34:09 -- accel/accel.sh@21 -- # val=0x80 00:06:36.841 12:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.841 12:34:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.841 12:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.841 12:34:09 -- accel/accel.sh@21 -- # val= 00:06:36.841 12:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.841 12:34:09 -- accel/accel.sh@21 -- # val=software 00:06:36.841 12:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.841 12:34:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.841 12:34:09 -- accel/accel.sh@21 -- # val=64 00:06:36.841 12:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.841 12:34:09 -- accel/accel.sh@21 -- # val=64 00:06:36.841 12:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.841 12:34:09 -- 
accel/accel.sh@21 -- # val=1 00:06:36.841 12:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.841 12:34:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.841 12:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.841 12:34:09 -- accel/accel.sh@21 -- # val=Yes 00:06:36.841 12:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.841 12:34:09 -- accel/accel.sh@21 -- # val= 00:06:36.841 12:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.841 12:34:09 -- accel/accel.sh@21 -- # val= 00:06:36.841 12:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.841 12:34:09 -- accel/accel.sh@20 -- # read -r var val 00:06:38.226 12:34:10 -- accel/accel.sh@21 -- # val= 00:06:38.226 12:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.226 12:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:38.226 12:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:38.226 12:34:10 -- accel/accel.sh@21 -- # val= 00:06:38.226 12:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.226 12:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:38.226 12:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:38.226 12:34:10 -- accel/accel.sh@21 -- # val= 00:06:38.226 12:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.226 12:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:38.226 12:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:38.226 12:34:10 -- accel/accel.sh@21 -- # val= 00:06:38.226 12:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.226 12:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:38.226 12:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:38.226 12:34:10 -- accel/accel.sh@21 -- # val= 00:06:38.226 12:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.226 12:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:38.226 12:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:38.226 12:34:10 -- accel/accel.sh@21 -- # val= 00:06:38.226 12:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.226 12:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:38.226 12:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:38.226 12:34:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.226 12:34:10 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:38.226 12:34:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.226 00:06:38.226 real 0m2.559s 00:06:38.226 user 0m2.354s 00:06:38.226 sys 0m0.213s 00:06:38.226 12:34:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.226 12:34:10 -- common/autotest_common.sh@10 -- # set +x 00:06:38.226 ************************************ 00:06:38.226 END TEST accel_fill 00:06:38.226 ************************************ 00:06:38.226 12:34:10 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:38.226 12:34:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:38.226 12:34:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.226 12:34:10 -- common/autotest_common.sh@10 -- # set +x 00:06:38.226 ************************************ 00:06:38.226 START TEST 
accel_copy_crc32c 00:06:38.226 ************************************ 00:06:38.226 12:34:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:38.226 12:34:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.226 12:34:10 -- accel/accel.sh@17 -- # local accel_module 00:06:38.226 12:34:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:38.226 12:34:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:38.226 12:34:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.226 12:34:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.226 12:34:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.226 12:34:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.226 12:34:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.226 12:34:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.226 12:34:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.226 12:34:11 -- accel/accel.sh@42 -- # jq -r . 00:06:38.226 [2024-11-20 12:34:11.027829] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.226 [2024-11-20 12:34:11.027903] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329108 ] 00:06:38.226 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.226 [2024-11-20 12:34:11.089946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.226 [2024-11-20 12:34:11.154277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.608 12:34:12 -- accel/accel.sh@18 -- # out=' 00:06:39.608 SPDK Configuration: 00:06:39.608 Core mask: 0x1 00:06:39.608 00:06:39.608 Accel Perf Configuration: 00:06:39.608 Workload Type: copy_crc32c 00:06:39.608 CRC-32C seed: 0 00:06:39.608 Vector size: 4096 bytes 00:06:39.608 Transfer size: 4096 bytes 00:06:39.608 Vector count 1 00:06:39.608 Module: software 00:06:39.608 Queue depth: 32 00:06:39.608 Allocate depth: 32 00:06:39.608 # threads/core: 1 00:06:39.608 Run time: 1 seconds 00:06:39.608 Verify: Yes 00:06:39.608 00:06:39.608 Running for 1 seconds... 00:06:39.608 00:06:39.608 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:39.608 ------------------------------------------------------------------------------------ 00:06:39.608 0,0 248416/s 970 MiB/s 0 0 00:06:39.608 ==================================================================================== 00:06:39.608 Total 248416/s 970 MiB/s 0 0' 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.608 12:34:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:39.608 12:34:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:39.608 12:34:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.608 12:34:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.608 12:34:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.608 12:34:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.608 12:34:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.608 12:34:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.608 12:34:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.608 12:34:12 -- accel/accel.sh@42 -- # jq -r . 
00:06:39.608 [2024-11-20 12:34:12.305771] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.608 [2024-11-20 12:34:12.305844] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329426 ] 00:06:39.608 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.608 [2024-11-20 12:34:12.367045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.608 [2024-11-20 12:34:12.429228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.608 12:34:12 -- accel/accel.sh@21 -- # val= 00:06:39.608 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.608 12:34:12 -- accel/accel.sh@21 -- # val= 00:06:39.608 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.608 12:34:12 -- accel/accel.sh@21 -- # val=0x1 00:06:39.608 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.608 12:34:12 -- accel/accel.sh@21 -- # val= 00:06:39.608 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.608 12:34:12 -- accel/accel.sh@21 -- # val= 00:06:39.608 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.608 12:34:12 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:39.608 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.608 12:34:12 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.608 12:34:12 -- accel/accel.sh@21 -- # val=0 00:06:39.608 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.608 12:34:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.608 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.608 12:34:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.608 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.608 12:34:12 -- accel/accel.sh@21 -- # val= 00:06:39.608 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.608 12:34:12 -- accel/accel.sh@21 -- # val=software 00:06:39.608 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.608 12:34:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.608 12:34:12 -- accel/accel.sh@21 -- # val=32 00:06:39.608 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 
00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.608 12:34:12 -- accel/accel.sh@21 -- # val=32 00:06:39.608 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.608 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.608 12:34:12 -- accel/accel.sh@21 -- # val=1 00:06:39.609 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.609 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.609 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.609 12:34:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:39.609 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.609 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.609 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.609 12:34:12 -- accel/accel.sh@21 -- # val=Yes 00:06:39.609 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.609 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.609 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.609 12:34:12 -- accel/accel.sh@21 -- # val= 00:06:39.609 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.609 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.609 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.609 12:34:12 -- accel/accel.sh@21 -- # val= 00:06:39.609 12:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.609 12:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.609 12:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:40.549 12:34:13 -- accel/accel.sh@21 -- # val= 00:06:40.549 12:34:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.550 12:34:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.550 12:34:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.550 12:34:13 -- accel/accel.sh@21 -- # val= 00:06:40.550 12:34:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.550 12:34:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.550 12:34:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.550 12:34:13 -- accel/accel.sh@21 -- # val= 00:06:40.550 12:34:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.550 12:34:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.550 12:34:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.550 12:34:13 -- accel/accel.sh@21 -- # val= 00:06:40.550 12:34:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.550 12:34:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.550 12:34:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.550 12:34:13 -- accel/accel.sh@21 -- # val= 00:06:40.550 12:34:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.550 12:34:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.550 12:34:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.550 12:34:13 -- accel/accel.sh@21 -- # val= 00:06:40.550 12:34:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.550 12:34:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.550 12:34:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.550 12:34:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.550 12:34:13 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:40.550 12:34:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.550 00:06:40.550 real 0m2.558s 00:06:40.550 user 0m2.368s 00:06:40.550 sys 0m0.197s 00:06:40.550 12:34:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.550 12:34:13 -- common/autotest_common.sh@10 -- # set +x 00:06:40.550 ************************************ 00:06:40.550 END TEST accel_copy_crc32c 00:06:40.550 ************************************ 00:06:40.550 
12:34:13 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:40.550 12:34:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:40.550 12:34:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.550 12:34:13 -- common/autotest_common.sh@10 -- # set +x 00:06:40.550 ************************************ 00:06:40.550 START TEST accel_copy_crc32c_C2 00:06:40.550 ************************************ 00:06:40.550 12:34:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:40.550 12:34:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.550 12:34:13 -- accel/accel.sh@17 -- # local accel_module 00:06:40.550 12:34:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:40.550 12:34:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:40.550 12:34:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.550 12:34:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.550 12:34:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.550 12:34:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.550 12:34:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.550 12:34:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.550 12:34:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.550 12:34:13 -- accel/accel.sh@42 -- # jq -r . 00:06:40.550 [2024-11-20 12:34:13.629440] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.550 [2024-11-20 12:34:13.629510] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329781 ] 00:06:40.810 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.810 [2024-11-20 12:34:13.691074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.810 [2024-11-20 12:34:13.755227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.195 12:34:14 -- accel/accel.sh@18 -- # out=' 00:06:42.195 SPDK Configuration: 00:06:42.195 Core mask: 0x1 00:06:42.195 00:06:42.195 Accel Perf Configuration: 00:06:42.195 Workload Type: copy_crc32c 00:06:42.195 CRC-32C seed: 0 00:06:42.195 Vector size: 4096 bytes 00:06:42.195 Transfer size: 8192 bytes 00:06:42.195 Vector count 2 00:06:42.195 Module: software 00:06:42.195 Queue depth: 32 00:06:42.195 Allocate depth: 32 00:06:42.195 # threads/core: 1 00:06:42.195 Run time: 1 seconds 00:06:42.195 Verify: Yes 00:06:42.195 00:06:42.195 Running for 1 seconds... 
00:06:42.195 00:06:42.195 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.195 ------------------------------------------------------------------------------------ 00:06:42.195 0,0 184832/s 1444 MiB/s 0 0 00:06:42.195 ==================================================================================== 00:06:42.195 Total 184832/s 722 MiB/s 0 0' 00:06:42.195 12:34:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:42.195 12:34:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:42.195 12:34:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.195 12:34:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.195 12:34:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.195 12:34:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.195 12:34:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.195 12:34:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.195 12:34:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.195 12:34:14 -- accel/accel.sh@42 -- # jq -r . 00:06:42.195 [2024-11-20 12:34:14.907613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.195 [2024-11-20 12:34:14.907716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330030 ] 00:06:42.195 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.195 [2024-11-20 12:34:14.980731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.195 [2024-11-20 12:34:15.044439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val= 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val= 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val=0x1 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val= 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val= 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val=0 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 
00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val= 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val=software 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val=32 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val=32 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val=1 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val=Yes 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val= 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:42.195 12:34:15 -- accel/accel.sh@21 -- # val= 00:06:42.195 12:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # IFS=: 00:06:42.195 12:34:15 -- accel/accel.sh@20 -- # read -r var val 00:06:43.138 12:34:16 -- accel/accel.sh@21 -- # val= 00:06:43.138 12:34:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.138 12:34:16 -- accel/accel.sh@20 -- # IFS=: 00:06:43.138 12:34:16 -- accel/accel.sh@20 -- # read -r var val 00:06:43.138 12:34:16 -- accel/accel.sh@21 -- # val= 00:06:43.138 12:34:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.138 12:34:16 -- accel/accel.sh@20 -- # IFS=: 00:06:43.138 12:34:16 -- accel/accel.sh@20 -- # read -r var val 00:06:43.138 12:34:16 -- accel/accel.sh@21 -- # val= 00:06:43.138 12:34:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.138 12:34:16 -- accel/accel.sh@20 -- # IFS=: 00:06:43.138 12:34:16 -- accel/accel.sh@20 -- # read -r var val 00:06:43.138 12:34:16 -- accel/accel.sh@21 -- # val= 00:06:43.138 12:34:16 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:43.138 12:34:16 -- accel/accel.sh@20 -- # IFS=: 00:06:43.138 12:34:16 -- accel/accel.sh@20 -- # read -r var val 00:06:43.138 12:34:16 -- accel/accel.sh@21 -- # val= 00:06:43.138 12:34:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.138 12:34:16 -- accel/accel.sh@20 -- # IFS=: 00:06:43.138 12:34:16 -- accel/accel.sh@20 -- # read -r var val 00:06:43.138 12:34:16 -- accel/accel.sh@21 -- # val= 00:06:43.138 12:34:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.138 12:34:16 -- accel/accel.sh@20 -- # IFS=: 00:06:43.138 12:34:16 -- accel/accel.sh@20 -- # read -r var val 00:06:43.138 12:34:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.138 12:34:16 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:43.138 12:34:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.138 00:06:43.138 real 0m2.573s 00:06:43.138 user 0m2.367s 00:06:43.138 sys 0m0.212s 00:06:43.138 12:34:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.138 12:34:16 -- common/autotest_common.sh@10 -- # set +x 00:06:43.138 ************************************ 00:06:43.138 END TEST accel_copy_crc32c_C2 00:06:43.138 ************************************ 00:06:43.138 12:34:16 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:43.138 12:34:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:43.138 12:34:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.138 12:34:16 -- common/autotest_common.sh@10 -- # set +x 00:06:43.138 ************************************ 00:06:43.138 START TEST accel_dualcast 00:06:43.138 ************************************ 00:06:43.138 12:34:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:43.138 12:34:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.138 12:34:16 -- accel/accel.sh@17 -- # local accel_module 00:06:43.138 12:34:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:43.138 12:34:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:43.138 12:34:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.138 12:34:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.138 12:34:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.138 12:34:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.138 12:34:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.138 12:34:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.138 12:34:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.138 12:34:16 -- accel/accel.sh@42 -- # jq -r . 00:06:43.138 [2024-11-20 12:34:16.244259] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:43.138 [2024-11-20 12:34:16.244339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330209 ] 00:06:43.399 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.399 [2024-11-20 12:34:16.307179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.399 [2024-11-20 12:34:16.373167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.782 12:34:17 -- accel/accel.sh@18 -- # out=' 00:06:44.782 SPDK Configuration: 00:06:44.782 Core mask: 0x1 00:06:44.782 00:06:44.782 Accel Perf Configuration: 00:06:44.782 Workload Type: dualcast 00:06:44.782 Transfer size: 4096 bytes 00:06:44.782 Vector count 1 00:06:44.782 Module: software 00:06:44.782 Queue depth: 32 00:06:44.782 Allocate depth: 32 00:06:44.782 # threads/core: 1 00:06:44.782 Run time: 1 seconds 00:06:44.782 Verify: Yes 00:06:44.782 00:06:44.782 Running for 1 seconds... 00:06:44.782 00:06:44.782 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.782 ------------------------------------------------------------------------------------ 00:06:44.782 0,0 364512/s 1423 MiB/s 0 0 00:06:44.782 ==================================================================================== 00:06:44.782 Total 364512/s 1423 MiB/s 0 0' 00:06:44.782 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.783 12:34:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:44.783 12:34:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:44.783 12:34:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.783 12:34:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.783 12:34:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.783 12:34:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.783 12:34:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.783 12:34:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.783 12:34:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.783 12:34:17 -- accel/accel.sh@42 -- # jq -r . 00:06:44.783 [2024-11-20 12:34:17.525547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:44.783 [2024-11-20 12:34:17.525616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330488 ] 00:06:44.783 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.783 [2024-11-20 12:34:17.598546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.783 [2024-11-20 12:34:17.660942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.783 12:34:17 -- accel/accel.sh@21 -- # val= 00:06:44.783 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.783 12:34:17 -- accel/accel.sh@21 -- # val= 00:06:44.783 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.783 12:34:17 -- accel/accel.sh@21 -- # val=0x1 00:06:44.783 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.783 12:34:17 -- accel/accel.sh@21 -- # val= 00:06:44.783 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.783 12:34:17 -- accel/accel.sh@21 -- # val= 00:06:44.783 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.783 12:34:17 -- accel/accel.sh@21 -- # val=dualcast 00:06:44.783 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.783 12:34:17 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.783 12:34:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:44.783 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.783 12:34:17 -- accel/accel.sh@21 -- # val= 00:06:44.783 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.783 12:34:17 -- accel/accel.sh@21 -- # val=software 00:06:44.783 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.783 12:34:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.783 12:34:17 -- accel/accel.sh@21 -- # val=32 00:06:44.783 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.783 12:34:17 -- accel/accel.sh@21 -- # val=32 00:06:44.783 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.783 12:34:17 -- accel/accel.sh@21 -- # val=1 00:06:44.783 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.783 12:34:17 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:44.783 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.783 12:34:17 -- accel/accel.sh@21 -- # val=Yes 00:06:44.783 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.783 12:34:17 -- accel/accel.sh@21 -- # val= 00:06:44.783 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.783 12:34:17 -- accel/accel.sh@21 -- # val= 00:06:44.783 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.783 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:06:45.725 12:34:18 -- accel/accel.sh@21 -- # val= 00:06:45.725 12:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.725 12:34:18 -- accel/accel.sh@20 -- # IFS=: 00:06:45.725 12:34:18 -- accel/accel.sh@20 -- # read -r var val 00:06:45.725 12:34:18 -- accel/accel.sh@21 -- # val= 00:06:45.725 12:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.725 12:34:18 -- accel/accel.sh@20 -- # IFS=: 00:06:45.725 12:34:18 -- accel/accel.sh@20 -- # read -r var val 00:06:45.725 12:34:18 -- accel/accel.sh@21 -- # val= 00:06:45.725 12:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.725 12:34:18 -- accel/accel.sh@20 -- # IFS=: 00:06:45.725 12:34:18 -- accel/accel.sh@20 -- # read -r var val 00:06:45.725 12:34:18 -- accel/accel.sh@21 -- # val= 00:06:45.725 12:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.725 12:34:18 -- accel/accel.sh@20 -- # IFS=: 00:06:45.725 12:34:18 -- accel/accel.sh@20 -- # read -r var val 00:06:45.725 12:34:18 -- accel/accel.sh@21 -- # val= 00:06:45.725 12:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.725 12:34:18 -- accel/accel.sh@20 -- # IFS=: 00:06:45.725 12:34:18 -- accel/accel.sh@20 -- # read -r var val 00:06:45.725 12:34:18 -- accel/accel.sh@21 -- # val= 00:06:45.725 12:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.725 12:34:18 -- accel/accel.sh@20 -- # IFS=: 00:06:45.725 12:34:18 -- accel/accel.sh@20 -- # read -r var val 00:06:45.725 12:34:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:45.725 12:34:18 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:45.725 12:34:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.725 00:06:45.725 real 0m2.572s 00:06:45.725 user 0m2.365s 00:06:45.725 sys 0m0.212s 00:06:45.725 12:34:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.725 12:34:18 -- common/autotest_common.sh@10 -- # set +x 00:06:45.725 ************************************ 00:06:45.725 END TEST accel_dualcast 00:06:45.725 ************************************ 00:06:45.725 12:34:18 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:45.725 12:34:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:45.725 12:34:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.725 12:34:18 -- common/autotest_common.sh@10 -- # set +x 00:06:45.986 ************************************ 00:06:45.986 START TEST accel_compare 00:06:45.986 ************************************ 00:06:45.986 12:34:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:45.986 12:34:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.986 12:34:18 -- 
accel/accel.sh@17 -- # local accel_module 00:06:45.986 12:34:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:45.986 12:34:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:45.986 12:34:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.986 12:34:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.986 12:34:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.986 12:34:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.986 12:34:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.986 12:34:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.986 12:34:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.986 12:34:18 -- accel/accel.sh@42 -- # jq -r . 00:06:45.986 [2024-11-20 12:34:18.861306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.986 [2024-11-20 12:34:18.861414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330843 ] 00:06:45.986 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.986 [2024-11-20 12:34:18.923511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.986 [2024-11-20 12:34:18.986885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.370 12:34:20 -- accel/accel.sh@18 -- # out=' 00:06:47.370 SPDK Configuration: 00:06:47.370 Core mask: 0x1 00:06:47.370 00:06:47.370 Accel Perf Configuration: 00:06:47.370 Workload Type: compare 00:06:47.370 Transfer size: 4096 bytes 00:06:47.370 Vector count 1 00:06:47.370 Module: software 00:06:47.370 Queue depth: 32 00:06:47.370 Allocate depth: 32 00:06:47.370 # threads/core: 1 00:06:47.370 Run time: 1 seconds 00:06:47.370 Verify: Yes 00:06:47.370 00:06:47.370 Running for 1 seconds... 00:06:47.370 00:06:47.370 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:47.370 ------------------------------------------------------------------------------------ 00:06:47.370 0,0 435040/s 1699 MiB/s 0 0 00:06:47.370 ==================================================================================== 00:06:47.370 Total 435040/s 1699 MiB/s 0 0' 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # IFS=: 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # read -r var val 00:06:47.370 12:34:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:47.370 12:34:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:47.370 12:34:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.370 12:34:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.370 12:34:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.370 12:34:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.370 12:34:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.370 12:34:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.370 12:34:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.370 12:34:20 -- accel/accel.sh@42 -- # jq -r . 00:06:47.370 [2024-11-20 12:34:20.140730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:47.370 [2024-11-20 12:34:20.140833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331177 ] 00:06:47.370 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.370 [2024-11-20 12:34:20.201831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.370 [2024-11-20 12:34:20.264039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.370 12:34:20 -- accel/accel.sh@21 -- # val= 00:06:47.370 12:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # IFS=: 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # read -r var val 00:06:47.370 12:34:20 -- accel/accel.sh@21 -- # val= 00:06:47.370 12:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # IFS=: 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # read -r var val 00:06:47.370 12:34:20 -- accel/accel.sh@21 -- # val=0x1 00:06:47.370 12:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # IFS=: 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # read -r var val 00:06:47.370 12:34:20 -- accel/accel.sh@21 -- # val= 00:06:47.370 12:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # IFS=: 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # read -r var val 00:06:47.370 12:34:20 -- accel/accel.sh@21 -- # val= 00:06:47.370 12:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # IFS=: 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # read -r var val 00:06:47.370 12:34:20 -- accel/accel.sh@21 -- # val=compare 00:06:47.370 12:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.370 12:34:20 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # IFS=: 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # read -r var val 00:06:47.370 12:34:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.370 12:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # IFS=: 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # read -r var val 00:06:47.370 12:34:20 -- accel/accel.sh@21 -- # val= 00:06:47.370 12:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # IFS=: 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # read -r var val 00:06:47.370 12:34:20 -- accel/accel.sh@21 -- # val=software 00:06:47.370 12:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.370 12:34:20 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # IFS=: 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # read -r var val 00:06:47.370 12:34:20 -- accel/accel.sh@21 -- # val=32 00:06:47.370 12:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # IFS=: 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # read -r var val 00:06:47.370 12:34:20 -- accel/accel.sh@21 -- # val=32 00:06:47.370 12:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # IFS=: 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # read -r var val 00:06:47.370 12:34:20 -- accel/accel.sh@21 -- # val=1 00:06:47.370 12:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # IFS=: 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # read -r var val 00:06:47.370 12:34:20 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:47.370 12:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # IFS=: 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # read -r var val 00:06:47.370 12:34:20 -- accel/accel.sh@21 -- # val=Yes 00:06:47.370 12:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # IFS=: 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # read -r var val 00:06:47.370 12:34:20 -- accel/accel.sh@21 -- # val= 00:06:47.370 12:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # IFS=: 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # read -r var val 00:06:47.370 12:34:20 -- accel/accel.sh@21 -- # val= 00:06:47.370 12:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # IFS=: 00:06:47.370 12:34:20 -- accel/accel.sh@20 -- # read -r var val 00:06:48.320 12:34:21 -- accel/accel.sh@21 -- # val= 00:06:48.320 12:34:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.320 12:34:21 -- accel/accel.sh@20 -- # IFS=: 00:06:48.320 12:34:21 -- accel/accel.sh@20 -- # read -r var val 00:06:48.320 12:34:21 -- accel/accel.sh@21 -- # val= 00:06:48.320 12:34:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.320 12:34:21 -- accel/accel.sh@20 -- # IFS=: 00:06:48.320 12:34:21 -- accel/accel.sh@20 -- # read -r var val 00:06:48.320 12:34:21 -- accel/accel.sh@21 -- # val= 00:06:48.320 12:34:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.320 12:34:21 -- accel/accel.sh@20 -- # IFS=: 00:06:48.320 12:34:21 -- accel/accel.sh@20 -- # read -r var val 00:06:48.320 12:34:21 -- accel/accel.sh@21 -- # val= 00:06:48.320 12:34:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.320 12:34:21 -- accel/accel.sh@20 -- # IFS=: 00:06:48.320 12:34:21 -- accel/accel.sh@20 -- # read -r var val 00:06:48.320 12:34:21 -- accel/accel.sh@21 -- # val= 00:06:48.320 12:34:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.320 12:34:21 -- accel/accel.sh@20 -- # IFS=: 00:06:48.320 12:34:21 -- accel/accel.sh@20 -- # read -r var val 00:06:48.320 12:34:21 -- accel/accel.sh@21 -- # val= 00:06:48.320 12:34:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.320 12:34:21 -- accel/accel.sh@20 -- # IFS=: 00:06:48.320 12:34:21 -- accel/accel.sh@20 -- # read -r var val 00:06:48.320 12:34:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.320 12:34:21 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:48.320 12:34:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.320 00:06:48.320 real 0m2.560s 00:06:48.320 user 0m2.362s 00:06:48.320 sys 0m0.204s 00:06:48.320 12:34:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.320 12:34:21 -- common/autotest_common.sh@10 -- # set +x 00:06:48.320 ************************************ 00:06:48.320 END TEST accel_compare 00:06:48.320 ************************************ 00:06:48.581 12:34:21 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:48.581 12:34:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:48.581 12:34:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.581 12:34:21 -- common/autotest_common.sh@10 -- # set +x 00:06:48.581 ************************************ 00:06:48.581 START TEST accel_xor 00:06:48.581 ************************************ 00:06:48.581 12:34:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:48.581 12:34:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.581 12:34:21 -- accel/accel.sh@17 
-- # local accel_module 00:06:48.581 12:34:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:48.581 12:34:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:48.581 12:34:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.581 12:34:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.581 12:34:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.581 12:34:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.581 12:34:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.581 12:34:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.581 12:34:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.581 12:34:21 -- accel/accel.sh@42 -- # jq -r . 00:06:48.581 [2024-11-20 12:34:21.463444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.581 [2024-11-20 12:34:21.463528] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331343 ] 00:06:48.581 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.581 [2024-11-20 12:34:21.526050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.581 [2024-11-20 12:34:21.591707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.967 12:34:22 -- accel/accel.sh@18 -- # out=' 00:06:49.967 SPDK Configuration: 00:06:49.967 Core mask: 0x1 00:06:49.967 00:06:49.967 Accel Perf Configuration: 00:06:49.967 Workload Type: xor 00:06:49.967 Source buffers: 2 00:06:49.967 Transfer size: 4096 bytes 00:06:49.967 Vector count 1 00:06:49.967 Module: software 00:06:49.967 Queue depth: 32 00:06:49.967 Allocate depth: 32 00:06:49.967 # threads/core: 1 00:06:49.967 Run time: 1 seconds 00:06:49.967 Verify: Yes 00:06:49.967 00:06:49.967 Running for 1 seconds... 00:06:49.967 00:06:49.967 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.967 ------------------------------------------------------------------------------------ 00:06:49.967 0,0 355616/s 1389 MiB/s 0 0 00:06:49.967 ==================================================================================== 00:06:49.967 Total 355616/s 1389 MiB/s 0 0' 00:06:49.967 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.967 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:49.967 12:34:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:49.967 12:34:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:49.967 12:34:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.967 12:34:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.967 12:34:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.967 12:34:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.967 12:34:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.967 12:34:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.967 12:34:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.967 12:34:22 -- accel/accel.sh@42 -- # jq -r . 00:06:49.967 [2024-11-20 12:34:22.744346] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
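The xor case follows the same pattern, with -w xor and the default two source buffers ("Source buffers: 2" in the configuration dump). A sketch under the same assumptions as the compare example above:

  # XOR two 4096-byte source buffers per operation, software engine, with verification
  "$SPDK"/build/examples/accel_perf -t 1 -w xor -y

355616 transfers/s at 4096 bytes each is roughly 1389 MiB/s, again matching the table.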
00:06:49.967 [2024-11-20 12:34:22.744446] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331551 ] 00:06:49.967 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.967 [2024-11-20 12:34:22.805976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.967 [2024-11-20 12:34:22.868333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.967 12:34:22 -- accel/accel.sh@21 -- # val= 00:06:49.967 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.967 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.967 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:49.967 12:34:22 -- accel/accel.sh@21 -- # val= 00:06:49.967 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.967 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.967 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:49.967 12:34:22 -- accel/accel.sh@21 -- # val=0x1 00:06:49.967 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.967 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.967 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:49.967 12:34:22 -- accel/accel.sh@21 -- # val= 00:06:49.967 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.967 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.967 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:49.967 12:34:22 -- accel/accel.sh@21 -- # val= 00:06:49.967 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.967 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.967 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:49.967 12:34:22 -- accel/accel.sh@21 -- # val=xor 00:06:49.967 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.967 12:34:22 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:49.967 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.967 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:49.967 12:34:22 -- accel/accel.sh@21 -- # val=2 00:06:49.967 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.967 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.967 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:49.967 12:34:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.967 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.967 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:49.968 12:34:22 -- accel/accel.sh@21 -- # val= 00:06:49.968 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:49.968 12:34:22 -- accel/accel.sh@21 -- # val=software 00:06:49.968 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.968 12:34:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:49.968 12:34:22 -- accel/accel.sh@21 -- # val=32 00:06:49.968 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:49.968 12:34:22 -- accel/accel.sh@21 -- # val=32 00:06:49.968 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:49.968 12:34:22 -- 
accel/accel.sh@21 -- # val=1 00:06:49.968 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:49.968 12:34:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.968 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:49.968 12:34:22 -- accel/accel.sh@21 -- # val=Yes 00:06:49.968 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:49.968 12:34:22 -- accel/accel.sh@21 -- # val= 00:06:49.968 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:49.968 12:34:22 -- accel/accel.sh@21 -- # val= 00:06:49.968 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:49.968 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.908 12:34:23 -- accel/accel.sh@21 -- # val= 00:06:50.908 12:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.908 12:34:23 -- accel/accel.sh@20 -- # IFS=: 00:06:50.908 12:34:23 -- accel/accel.sh@20 -- # read -r var val 00:06:50.908 12:34:23 -- accel/accel.sh@21 -- # val= 00:06:50.908 12:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.908 12:34:23 -- accel/accel.sh@20 -- # IFS=: 00:06:50.908 12:34:23 -- accel/accel.sh@20 -- # read -r var val 00:06:50.908 12:34:23 -- accel/accel.sh@21 -- # val= 00:06:50.908 12:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.908 12:34:23 -- accel/accel.sh@20 -- # IFS=: 00:06:50.908 12:34:23 -- accel/accel.sh@20 -- # read -r var val 00:06:50.908 12:34:23 -- accel/accel.sh@21 -- # val= 00:06:50.908 12:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.908 12:34:23 -- accel/accel.sh@20 -- # IFS=: 00:06:50.908 12:34:23 -- accel/accel.sh@20 -- # read -r var val 00:06:50.908 12:34:23 -- accel/accel.sh@21 -- # val= 00:06:50.908 12:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.908 12:34:23 -- accel/accel.sh@20 -- # IFS=: 00:06:50.908 12:34:23 -- accel/accel.sh@20 -- # read -r var val 00:06:50.908 12:34:23 -- accel/accel.sh@21 -- # val= 00:06:50.908 12:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.908 12:34:23 -- accel/accel.sh@20 -- # IFS=: 00:06:50.908 12:34:23 -- accel/accel.sh@20 -- # read -r var val 00:06:50.908 12:34:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.908 12:34:23 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:50.908 12:34:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.908 00:06:50.908 real 0m2.561s 00:06:50.908 user 0m2.373s 00:06:50.908 sys 0m0.195s 00:06:50.908 12:34:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.908 12:34:23 -- common/autotest_common.sh@10 -- # set +x 00:06:50.908 ************************************ 00:06:50.908 END TEST accel_xor 00:06:50.908 ************************************ 00:06:51.168 12:34:24 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:51.168 12:34:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:51.168 12:34:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.168 12:34:24 -- common/autotest_common.sh@10 -- # set +x 00:06:51.168 ************************************ 00:06:51.168 START TEST accel_xor 
00:06:51.168 ************************************ 00:06:51.168 12:34:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:51.168 12:34:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.168 12:34:24 -- accel/accel.sh@17 -- # local accel_module 00:06:51.168 12:34:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:51.168 12:34:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:51.168 12:34:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.168 12:34:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.168 12:34:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.168 12:34:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.168 12:34:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.168 12:34:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.168 12:34:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.168 12:34:24 -- accel/accel.sh@42 -- # jq -r . 00:06:51.168 [2024-11-20 12:34:24.068291] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:51.169 [2024-11-20 12:34:24.068395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331905 ] 00:06:51.169 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.169 [2024-11-20 12:34:24.131146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.169 [2024-11-20 12:34:24.195648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.553 12:34:25 -- accel/accel.sh@18 -- # out=' 00:06:52.553 SPDK Configuration: 00:06:52.553 Core mask: 0x1 00:06:52.553 00:06:52.553 Accel Perf Configuration: 00:06:52.553 Workload Type: xor 00:06:52.553 Source buffers: 3 00:06:52.553 Transfer size: 4096 bytes 00:06:52.553 Vector count 1 00:06:52.553 Module: software 00:06:52.553 Queue depth: 32 00:06:52.553 Allocate depth: 32 00:06:52.553 # threads/core: 1 00:06:52.553 Run time: 1 seconds 00:06:52.553 Verify: Yes 00:06:52.553 00:06:52.553 Running for 1 seconds... 00:06:52.553 00:06:52.553 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.553 ------------------------------------------------------------------------------------ 00:06:52.553 0,0 344736/s 1346 MiB/s 0 0 00:06:52.553 ==================================================================================== 00:06:52.553 Total 344736/s 1346 MiB/s 0 0' 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:52.553 12:34:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:52.553 12:34:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:52.553 12:34:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.553 12:34:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.553 12:34:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.553 12:34:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.553 12:34:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.553 12:34:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.553 12:34:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.553 12:34:25 -- accel/accel.sh@42 -- # jq -r . 00:06:52.553 [2024-11-20 12:34:25.346685] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
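This second accel_xor test reuses the same workload but adds -x 3, which is why "Source buffers" rises from 2 to 3 in the configuration dump and why throughput drops slightly (344736/s ≈ 1346 MiB/s), since each operation now reads one extra source buffer. Sketch, same assumptions as above:

  # XOR with three source buffers per operation instead of the default two
  "$SPDK"/build/examples/accel_perf -t 1 -w xor -y -x 3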
00:06:52.553 [2024-11-20 12:34:25.346761] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid332239 ] 00:06:52.553 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.553 [2024-11-20 12:34:25.408425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.553 [2024-11-20 12:34:25.470237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.553 12:34:25 -- accel/accel.sh@21 -- # val= 00:06:52.553 12:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:52.553 12:34:25 -- accel/accel.sh@21 -- # val= 00:06:52.553 12:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:52.553 12:34:25 -- accel/accel.sh@21 -- # val=0x1 00:06:52.553 12:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:52.553 12:34:25 -- accel/accel.sh@21 -- # val= 00:06:52.553 12:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:52.553 12:34:25 -- accel/accel.sh@21 -- # val= 00:06:52.553 12:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:52.553 12:34:25 -- accel/accel.sh@21 -- # val=xor 00:06:52.553 12:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.553 12:34:25 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:52.553 12:34:25 -- accel/accel.sh@21 -- # val=3 00:06:52.553 12:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:52.553 12:34:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.553 12:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:52.553 12:34:25 -- accel/accel.sh@21 -- # val= 00:06:52.553 12:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:52.553 12:34:25 -- accel/accel.sh@21 -- # val=software 00:06:52.553 12:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.553 12:34:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:52.553 12:34:25 -- accel/accel.sh@21 -- # val=32 00:06:52.553 12:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:52.553 12:34:25 -- accel/accel.sh@21 -- # val=32 00:06:52.553 12:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:52.553 12:34:25 -- 
accel/accel.sh@21 -- # val=1 00:06:52.553 12:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:52.553 12:34:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:52.553 12:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:52.553 12:34:25 -- accel/accel.sh@21 -- # val=Yes 00:06:52.553 12:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:52.553 12:34:25 -- accel/accel.sh@21 -- # val= 00:06:52.553 12:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.553 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:52.553 12:34:25 -- accel/accel.sh@21 -- # val= 00:06:52.554 12:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.554 12:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:52.554 12:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 12:34:26 -- accel/accel.sh@21 -- # val= 00:06:53.496 12:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 12:34:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 12:34:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 12:34:26 -- accel/accel.sh@21 -- # val= 00:06:53.496 12:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 12:34:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 12:34:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 12:34:26 -- accel/accel.sh@21 -- # val= 00:06:53.496 12:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 12:34:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 12:34:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 12:34:26 -- accel/accel.sh@21 -- # val= 00:06:53.496 12:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 12:34:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 12:34:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 12:34:26 -- accel/accel.sh@21 -- # val= 00:06:53.496 12:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 12:34:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 12:34:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 12:34:26 -- accel/accel.sh@21 -- # val= 00:06:53.496 12:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 12:34:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 12:34:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 12:34:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.496 12:34:26 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:53.496 12:34:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.496 00:06:53.496 real 0m2.559s 00:06:53.496 user 0m2.359s 00:06:53.496 sys 0m0.206s 00:06:53.496 12:34:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.496 12:34:26 -- common/autotest_common.sh@10 -- # set +x 00:06:53.496 ************************************ 00:06:53.496 END TEST accel_xor 00:06:53.496 ************************************ 00:06:53.757 12:34:26 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:53.757 12:34:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:53.757 12:34:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.757 12:34:26 -- common/autotest_common.sh@10 -- # set +x 00:06:53.757 ************************************ 00:06:53.757 START TEST 
accel_dif_verify 00:06:53.757 ************************************ 00:06:53.757 12:34:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:53.757 12:34:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.757 12:34:26 -- accel/accel.sh@17 -- # local accel_module 00:06:53.757 12:34:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:53.757 12:34:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:53.757 12:34:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.757 12:34:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.757 12:34:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.757 12:34:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.757 12:34:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.757 12:34:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.757 12:34:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.757 12:34:26 -- accel/accel.sh@42 -- # jq -r . 00:06:53.757 [2024-11-20 12:34:26.670852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.757 [2024-11-20 12:34:26.670929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid332493 ] 00:06:53.757 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.757 [2024-11-20 12:34:26.733613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.757 [2024-11-20 12:34:26.799341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.143 12:34:27 -- accel/accel.sh@18 -- # out=' 00:06:55.143 SPDK Configuration: 00:06:55.143 Core mask: 0x1 00:06:55.143 00:06:55.143 Accel Perf Configuration: 00:06:55.143 Workload Type: dif_verify 00:06:55.143 Vector size: 4096 bytes 00:06:55.143 Transfer size: 4096 bytes 00:06:55.143 Block size: 512 bytes 00:06:55.143 Metadata size: 8 bytes 00:06:55.143 Vector count 1 00:06:55.143 Module: software 00:06:55.143 Queue depth: 32 00:06:55.143 Allocate depth: 32 00:06:55.143 # threads/core: 1 00:06:55.143 Run time: 1 seconds 00:06:55.143 Verify: No 00:06:55.143 00:06:55.143 Running for 1 seconds... 00:06:55.143 00:06:55.143 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.144 ------------------------------------------------------------------------------------ 00:06:55.144 0,0 95040/s 377 MiB/s 0 0 00:06:55.144 ==================================================================================== 00:06:55.144 Total 95040/s 371 MiB/s 0 0' 00:06:55.144 12:34:27 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:27 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:55.144 12:34:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:55.144 12:34:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.144 12:34:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.144 12:34:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.144 12:34:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.144 12:34:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.144 12:34:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.144 12:34:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.144 12:34:27 -- accel/accel.sh@42 -- # jq -r . 
00:06:55.144 [2024-11-20 12:34:27.951819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.144 [2024-11-20 12:34:27.951888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid332635 ] 00:06:55.144 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.144 [2024-11-20 12:34:28.013134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.144 [2024-11-20 12:34:28.075753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val= 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val= 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val=0x1 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val= 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val= 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val=dif_verify 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val= 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val=software 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@23 -- # 
accel_module=software 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val=32 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val=32 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val=1 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val=No 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val= 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:55.144 12:34:28 -- accel/accel.sh@21 -- # val= 00:06:55.144 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:55.144 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:56.530 12:34:29 -- accel/accel.sh@21 -- # val= 00:06:56.530 12:34:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.530 12:34:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.530 12:34:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.530 12:34:29 -- accel/accel.sh@21 -- # val= 00:06:56.530 12:34:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.530 12:34:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.530 12:34:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.530 12:34:29 -- accel/accel.sh@21 -- # val= 00:06:56.530 12:34:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.530 12:34:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.530 12:34:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.530 12:34:29 -- accel/accel.sh@21 -- # val= 00:06:56.530 12:34:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.530 12:34:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.530 12:34:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.530 12:34:29 -- accel/accel.sh@21 -- # val= 00:06:56.530 12:34:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.530 12:34:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.530 12:34:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.530 12:34:29 -- accel/accel.sh@21 -- # val= 00:06:56.530 12:34:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.530 12:34:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.530 12:34:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.530 12:34:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.530 12:34:29 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:56.530 12:34:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.530 00:06:56.530 real 0m2.563s 00:06:56.530 user 0m2.370s 00:06:56.530 sys 0m0.200s 00:06:56.530 12:34:29 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.530 12:34:29 -- common/autotest_common.sh@10 -- # set +x 00:06:56.530 ************************************ 00:06:56.530 END TEST accel_dif_verify 00:06:56.530 ************************************ 00:06:56.530 12:34:29 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:56.530 12:34:29 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:56.530 12:34:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.530 12:34:29 -- common/autotest_common.sh@10 -- # set +x 00:06:56.530 ************************************ 00:06:56.530 START TEST accel_dif_generate 00:06:56.530 ************************************ 00:06:56.530 12:34:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:56.530 12:34:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.530 12:34:29 -- accel/accel.sh@17 -- # local accel_module 00:06:56.530 12:34:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:56.530 12:34:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:56.530 12:34:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.530 12:34:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.530 12:34:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.530 12:34:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.530 12:34:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.530 12:34:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.530 12:34:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.530 12:34:29 -- accel/accel.sh@42 -- # jq -r . 00:06:56.530 [2024-11-20 12:34:29.274705] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.530 [2024-11-20 12:34:29.274784] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid332962 ] 00:06:56.530 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.530 [2024-11-20 12:34:29.337316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.530 [2024-11-20 12:34:29.400301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.474 12:34:30 -- accel/accel.sh@18 -- # out=' 00:06:57.474 SPDK Configuration: 00:06:57.474 Core mask: 0x1 00:06:57.474 00:06:57.474 Accel Perf Configuration: 00:06:57.474 Workload Type: dif_generate 00:06:57.474 Vector size: 4096 bytes 00:06:57.474 Transfer size: 4096 bytes 00:06:57.474 Block size: 512 bytes 00:06:57.474 Metadata size: 8 bytes 00:06:57.474 Vector count 1 00:06:57.474 Module: software 00:06:57.474 Queue depth: 32 00:06:57.474 Allocate depth: 32 00:06:57.474 # threads/core: 1 00:06:57.474 Run time: 1 seconds 00:06:57.474 Verify: No 00:06:57.474 00:06:57.474 Running for 1 seconds... 
00:06:57.474 00:06:57.474 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.474 ------------------------------------------------------------------------------------ 00:06:57.474 0,0 113792/s 451 MiB/s 0 0 00:06:57.474 ==================================================================================== 00:06:57.474 Total 113792/s 444 MiB/s 0 0' 00:06:57.474 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.474 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.474 12:34:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:57.474 12:34:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:57.474 12:34:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.474 12:34:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.474 12:34:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.474 12:34:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.474 12:34:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.474 12:34:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.474 12:34:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.474 12:34:30 -- accel/accel.sh@42 -- # jq -r . 00:06:57.474 [2024-11-20 12:34:30.553859] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:57.474 [2024-11-20 12:34:30.553957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333301 ] 00:06:57.735 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.735 [2024-11-20 12:34:30.616532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.735 [2024-11-20 12:34:30.677224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val= 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val= 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val=0x1 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val= 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val= 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val=dif_generate 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 
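dif_generate is the generation-only counterpart: 113792 transfers/s at 4096 bytes each gives the 444 MiB/s in the Total row, and the per-core 451 MiB/s is again consistent with the 64 metadata bytes per operation being included there. Sketch, same assumptions as above:

  # generate DIF protection information only, no verification pass
  "$SPDK"/build/examples/accel_perf -t 1 -w dif_generate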
00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val= 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val=software 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val=32 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val=32 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val=1 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val=No 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val= 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.735 12:34:30 -- accel/accel.sh@21 -- # val= 00:06:57.735 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.735 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:06:59.119 12:34:31 -- accel/accel.sh@21 -- # val= 00:06:59.119 12:34:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.119 12:34:31 -- accel/accel.sh@20 -- # IFS=: 00:06:59.119 12:34:31 -- accel/accel.sh@20 -- # read -r var val 00:06:59.119 12:34:31 -- accel/accel.sh@21 -- # val= 00:06:59.119 12:34:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.119 12:34:31 -- accel/accel.sh@20 -- # IFS=: 00:06:59.119 12:34:31 -- accel/accel.sh@20 -- # read -r var val 00:06:59.119 12:34:31 -- accel/accel.sh@21 -- # val= 00:06:59.119 12:34:31 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:59.119 12:34:31 -- accel/accel.sh@20 -- # IFS=: 00:06:59.119 12:34:31 -- accel/accel.sh@20 -- # read -r var val 00:06:59.119 12:34:31 -- accel/accel.sh@21 -- # val= 00:06:59.119 12:34:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.119 12:34:31 -- accel/accel.sh@20 -- # IFS=: 00:06:59.119 12:34:31 -- accel/accel.sh@20 -- # read -r var val 00:06:59.119 12:34:31 -- accel/accel.sh@21 -- # val= 00:06:59.119 12:34:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.119 12:34:31 -- accel/accel.sh@20 -- # IFS=: 00:06:59.119 12:34:31 -- accel/accel.sh@20 -- # read -r var val 00:06:59.119 12:34:31 -- accel/accel.sh@21 -- # val= 00:06:59.119 12:34:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.119 12:34:31 -- accel/accel.sh@20 -- # IFS=: 00:06:59.119 12:34:31 -- accel/accel.sh@20 -- # read -r var val 00:06:59.119 12:34:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.119 12:34:31 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:59.119 12:34:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.119 00:06:59.119 real 0m2.559s 00:06:59.119 user 0m2.370s 00:06:59.119 sys 0m0.196s 00:06:59.119 12:34:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.119 12:34:31 -- common/autotest_common.sh@10 -- # set +x 00:06:59.119 ************************************ 00:06:59.119 END TEST accel_dif_generate 00:06:59.119 ************************************ 00:06:59.119 12:34:31 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:59.119 12:34:31 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:59.119 12:34:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.119 12:34:31 -- common/autotest_common.sh@10 -- # set +x 00:06:59.119 ************************************ 00:06:59.119 START TEST accel_dif_generate_copy 00:06:59.119 ************************************ 00:06:59.119 12:34:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:59.120 12:34:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.120 12:34:31 -- accel/accel.sh@17 -- # local accel_module 00:06:59.120 12:34:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:59.120 12:34:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:59.120 12:34:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.120 12:34:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.120 12:34:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.120 12:34:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.120 12:34:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.120 12:34:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.120 12:34:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.120 12:34:31 -- accel/accel.sh@42 -- # jq -r . 00:06:59.120 [2024-11-20 12:34:31.879028] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:59.120 [2024-11-20 12:34:31.879099] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333614 ] 00:06:59.120 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.120 [2024-11-20 12:34:31.941254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.120 [2024-11-20 12:34:32.003948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.060 12:34:33 -- accel/accel.sh@18 -- # out=' 00:07:00.060 SPDK Configuration: 00:07:00.060 Core mask: 0x1 00:07:00.060 00:07:00.060 Accel Perf Configuration: 00:07:00.060 Workload Type: dif_generate_copy 00:07:00.060 Vector size: 4096 bytes 00:07:00.060 Transfer size: 4096 bytes 00:07:00.060 Vector count 1 00:07:00.060 Module: software 00:07:00.060 Queue depth: 32 00:07:00.060 Allocate depth: 32 00:07:00.060 # threads/core: 1 00:07:00.060 Run time: 1 seconds 00:07:00.060 Verify: No 00:07:00.060 00:07:00.060 Running for 1 seconds... 00:07:00.061 00:07:00.061 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.061 ------------------------------------------------------------------------------------ 00:07:00.061 0,0 87232/s 346 MiB/s 0 0 00:07:00.061 ==================================================================================== 00:07:00.061 Total 87232/s 340 MiB/s 0 0' 00:07:00.061 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.061 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.061 12:34:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:00.061 12:34:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:00.061 12:34:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.061 12:34:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.061 12:34:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.061 12:34:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.061 12:34:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.061 12:34:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.061 12:34:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.061 12:34:33 -- accel/accel.sh@42 -- # jq -r . 00:07:00.061 [2024-11-20 12:34:33.156654] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
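dif_generate_copy generates the protection information while also copying the data to a separate destination buffer, which is the likely reason throughput is lower than plain dif_generate (87232/s ≈ 340 MiB/s versus 113792/s above). Sketch, same assumptions as above:

  # generate DIF protection information and copy the data in one operation
  "$SPDK"/build/examples/accel_perf -t 1 -w dif_generate_copy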
00:07:00.061 [2024-11-20 12:34:33.156756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333740 ] 00:07:00.322 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.322 [2024-11-20 12:34:33.218825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.322 [2024-11-20 12:34:33.285996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.322 12:34:33 -- accel/accel.sh@21 -- # val= 00:07:00.322 12:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.322 12:34:33 -- accel/accel.sh@21 -- # val= 00:07:00.322 12:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.322 12:34:33 -- accel/accel.sh@21 -- # val=0x1 00:07:00.322 12:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.322 12:34:33 -- accel/accel.sh@21 -- # val= 00:07:00.322 12:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.322 12:34:33 -- accel/accel.sh@21 -- # val= 00:07:00.322 12:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.322 12:34:33 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:00.322 12:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.322 12:34:33 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.322 12:34:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.322 12:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.322 12:34:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.322 12:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.322 12:34:33 -- accel/accel.sh@21 -- # val= 00:07:00.322 12:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.322 12:34:33 -- accel/accel.sh@21 -- # val=software 00:07:00.322 12:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.322 12:34:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.322 12:34:33 -- accel/accel.sh@21 -- # val=32 00:07:00.322 12:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.322 12:34:33 -- accel/accel.sh@21 -- # val=32 00:07:00.322 12:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # read -r var 
val 00:07:00.322 12:34:33 -- accel/accel.sh@21 -- # val=1 00:07:00.322 12:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.322 12:34:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.322 12:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.322 12:34:33 -- accel/accel.sh@21 -- # val=No 00:07:00.322 12:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.322 12:34:33 -- accel/accel.sh@21 -- # val= 00:07:00.322 12:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.322 12:34:33 -- accel/accel.sh@21 -- # val= 00:07:00.322 12:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.322 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:01.708 12:34:34 -- accel/accel.sh@21 -- # val= 00:07:01.708 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.708 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:07:01.708 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:07:01.708 12:34:34 -- accel/accel.sh@21 -- # val= 00:07:01.708 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.708 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:07:01.708 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:07:01.708 12:34:34 -- accel/accel.sh@21 -- # val= 00:07:01.708 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.708 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:07:01.708 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:07:01.708 12:34:34 -- accel/accel.sh@21 -- # val= 00:07:01.708 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.708 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:07:01.708 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:07:01.708 12:34:34 -- accel/accel.sh@21 -- # val= 00:07:01.708 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.708 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:07:01.708 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:07:01.708 12:34:34 -- accel/accel.sh@21 -- # val= 00:07:01.708 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.708 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:07:01.708 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:07:01.708 12:34:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.708 12:34:34 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:01.708 12:34:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.708 00:07:01.708 real 0m2.565s 00:07:01.708 user 0m2.370s 00:07:01.708 sys 0m0.202s 00:07:01.708 12:34:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.708 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:07:01.708 ************************************ 00:07:01.708 END TEST accel_dif_generate_copy 00:07:01.708 ************************************ 00:07:01.708 12:34:34 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:01.708 12:34:34 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:01.708 12:34:34 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:01.708 12:34:34 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:07:01.708 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:07:01.708 ************************************ 00:07:01.708 START TEST accel_comp 00:07:01.708 ************************************ 00:07:01.708 12:34:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:01.708 12:34:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.708 12:34:34 -- accel/accel.sh@17 -- # local accel_module 00:07:01.708 12:34:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:01.708 12:34:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:01.708 12:34:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.708 12:34:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.708 12:34:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.708 12:34:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.708 12:34:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.708 12:34:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.708 12:34:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.708 12:34:34 -- accel/accel.sh@42 -- # jq -r . 00:07:01.708 [2024-11-20 12:34:34.485730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.708 [2024-11-20 12:34:34.485800] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334023 ] 00:07:01.708 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.708 [2024-11-20 12:34:34.547070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.708 [2024-11-20 12:34:34.608814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.650 12:34:35 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:02.650 00:07:02.650 SPDK Configuration: 00:07:02.650 Core mask: 0x1 00:07:02.650 00:07:02.650 Accel Perf Configuration: 00:07:02.650 Workload Type: compress 00:07:02.650 Transfer size: 4096 bytes 00:07:02.650 Vector count 1 00:07:02.650 Module: software 00:07:02.650 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:02.650 Queue depth: 32 00:07:02.650 Allocate depth: 32 00:07:02.650 # threads/core: 1 00:07:02.650 Run time: 1 seconds 00:07:02.650 Verify: No 00:07:02.650 00:07:02.650 Running for 1 seconds... 
00:07:02.650 00:07:02.650 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.650 ------------------------------------------------------------------------------------ 00:07:02.650 0,0 47392/s 197 MiB/s 0 0 00:07:02.650 ==================================================================================== 00:07:02.650 Total 47392/s 185 MiB/s 0 0' 00:07:02.650 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.650 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.650 12:34:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:02.650 12:34:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:02.650 12:34:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.650 12:34:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.650 12:34:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.650 12:34:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.650 12:34:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.650 12:34:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.650 12:34:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.650 12:34:35 -- accel/accel.sh@42 -- # jq -r . 00:07:02.911 [2024-11-20 12:34:35.762687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.911 [2024-11-20 12:34:35.762762] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334359 ] 00:07:02.911 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.911 [2024-11-20 12:34:35.824200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.911 [2024-11-20 12:34:35.886244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.911 12:34:35 -- accel/accel.sh@21 -- # val= 00:07:02.911 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.911 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.911 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.911 12:34:35 -- accel/accel.sh@21 -- # val= 00:07:02.911 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.911 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.911 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.911 12:34:35 -- accel/accel.sh@21 -- # val= 00:07:02.912 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.912 12:34:35 -- accel/accel.sh@21 -- # val=0x1 00:07:02.912 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.912 12:34:35 -- accel/accel.sh@21 -- # val= 00:07:02.912 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.912 12:34:35 -- accel/accel.sh@21 -- # val= 00:07:02.912 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.912 12:34:35 -- accel/accel.sh@21 -- # val=compress 00:07:02.912 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.912 12:34:35 -- 
accel/accel.sh@24 -- # accel_opc=compress 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.912 12:34:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.912 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.912 12:34:35 -- accel/accel.sh@21 -- # val= 00:07:02.912 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.912 12:34:35 -- accel/accel.sh@21 -- # val=software 00:07:02.912 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.912 12:34:35 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.912 12:34:35 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:02.912 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.912 12:34:35 -- accel/accel.sh@21 -- # val=32 00:07:02.912 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.912 12:34:35 -- accel/accel.sh@21 -- # val=32 00:07:02.912 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.912 12:34:35 -- accel/accel.sh@21 -- # val=1 00:07:02.912 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.912 12:34:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.912 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.912 12:34:35 -- accel/accel.sh@21 -- # val=No 00:07:02.912 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.912 12:34:35 -- accel/accel.sh@21 -- # val= 00:07:02.912 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.912 12:34:35 -- accel/accel.sh@21 -- # val= 00:07:02.912 12:34:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.912 12:34:35 -- accel/accel.sh@20 -- # read -r var val 00:07:04.299 12:34:37 -- accel/accel.sh@21 -- # val= 00:07:04.299 12:34:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.299 12:34:37 -- accel/accel.sh@20 -- # IFS=: 00:07:04.299 12:34:37 -- accel/accel.sh@20 -- # read -r var val 00:07:04.300 12:34:37 -- accel/accel.sh@21 -- # val= 00:07:04.300 12:34:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.300 12:34:37 -- accel/accel.sh@20 -- # IFS=: 00:07:04.300 12:34:37 -- accel/accel.sh@20 -- # read -r var val 00:07:04.300 12:34:37 -- accel/accel.sh@21 -- # val= 00:07:04.300 12:34:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.300 12:34:37 -- accel/accel.sh@20 -- # IFS=: 00:07:04.300 
12:34:37 -- accel/accel.sh@20 -- # read -r var val 00:07:04.300 12:34:37 -- accel/accel.sh@21 -- # val= 00:07:04.300 12:34:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.300 12:34:37 -- accel/accel.sh@20 -- # IFS=: 00:07:04.300 12:34:37 -- accel/accel.sh@20 -- # read -r var val 00:07:04.300 12:34:37 -- accel/accel.sh@21 -- # val= 00:07:04.300 12:34:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.300 12:34:37 -- accel/accel.sh@20 -- # IFS=: 00:07:04.300 12:34:37 -- accel/accel.sh@20 -- # read -r var val 00:07:04.300 12:34:37 -- accel/accel.sh@21 -- # val= 00:07:04.300 12:34:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.300 12:34:37 -- accel/accel.sh@20 -- # IFS=: 00:07:04.300 12:34:37 -- accel/accel.sh@20 -- # read -r var val 00:07:04.300 12:34:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.300 12:34:37 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:04.300 12:34:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.300 00:07:04.300 real 0m2.559s 00:07:04.300 user 0m2.373s 00:07:04.300 sys 0m0.193s 00:07:04.300 12:34:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.300 12:34:37 -- common/autotest_common.sh@10 -- # set +x 00:07:04.300 ************************************ 00:07:04.300 END TEST accel_comp 00:07:04.300 ************************************ 00:07:04.300 12:34:37 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:04.300 12:34:37 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:04.300 12:34:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.300 12:34:37 -- common/autotest_common.sh@10 -- # set +x 00:07:04.300 ************************************ 00:07:04.300 START TEST accel_decomp 00:07:04.300 ************************************ 00:07:04.300 12:34:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:04.300 12:34:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.300 12:34:37 -- accel/accel.sh@17 -- # local accel_module 00:07:04.300 12:34:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:04.300 12:34:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:04.300 12:34:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.300 12:34:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.300 12:34:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.300 12:34:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.300 12:34:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.300 12:34:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.300 12:34:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.300 12:34:37 -- accel/accel.sh@42 -- # jq -r . 00:07:04.300 [2024-11-20 12:34:37.088489] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
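(A reading aid, not part of the captured log: the accel_perf options on the command lines traced above appear to correspond one-for-one to the fields in the "Accel Perf Configuration" summaries the tool prints afterwards.)
  # -t 1                          -> "Run time: 1 seconds"
  # -w compress / -w decompress   -> "Workload Type: compress" / "Workload Type: decompress"
  # -l .../spdk/test/accel/bib    -> "File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib"
  # -y                            -> "Verify: Yes" (the compress run, invoked without -y, reports "Verify: No")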
00:07:04.300 [2024-11-20 12:34:37.088564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334711 ] 00:07:04.300 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.300 [2024-11-20 12:34:37.151162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.300 [2024-11-20 12:34:37.215804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.243 12:34:38 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:05.243 00:07:05.243 SPDK Configuration: 00:07:05.243 Core mask: 0x1 00:07:05.243 00:07:05.243 Accel Perf Configuration: 00:07:05.243 Workload Type: decompress 00:07:05.243 Transfer size: 4096 bytes 00:07:05.243 Vector count 1 00:07:05.243 Module: software 00:07:05.243 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:05.243 Queue depth: 32 00:07:05.243 Allocate depth: 32 00:07:05.243 # threads/core: 1 00:07:05.243 Run time: 1 seconds 00:07:05.243 Verify: Yes 00:07:05.243 00:07:05.243 Running for 1 seconds... 00:07:05.243 00:07:05.243 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.243 ------------------------------------------------------------------------------------ 00:07:05.243 0,0 62848/s 115 MiB/s 0 0 00:07:05.243 ==================================================================================== 00:07:05.243 Total 62848/s 245 MiB/s 0 0' 00:07:05.243 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.243 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.243 12:34:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:05.243 12:34:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:05.243 12:34:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.243 12:34:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.243 12:34:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.243 12:34:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.243 12:34:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.243 12:34:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.243 12:34:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.243 12:34:38 -- accel/accel.sh@42 -- # jq -r . 00:07:05.504 [2024-11-20 12:34:38.370252] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
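A quick sanity check of the two 4096-byte runs above (not part of the captured output; assumes 1 MiB = 1048576 bytes): the Total bandwidth follows from transfers per second multiplied by the transfer size.
  echo "scale=1; 47392 * 4096 / 1048576" | bc   # compress run:   ~185.1 MiB/s, matches "Total 47392/s 185 MiB/s"
  echo "scale=1; 62848 * 4096 / 1048576" | bc   # decompress run: ~245.5 MiB/s, matches "Total 62848/s 245 MiB/s"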
00:07:05.504 [2024-11-20 12:34:38.370346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334872 ] 00:07:05.504 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.504 [2024-11-20 12:34:38.433256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.504 [2024-11-20 12:34:38.495744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.504 12:34:38 -- accel/accel.sh@21 -- # val= 00:07:05.504 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.504 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.504 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.504 12:34:38 -- accel/accel.sh@21 -- # val= 00:07:05.504 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.504 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.504 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.504 12:34:38 -- accel/accel.sh@21 -- # val= 00:07:05.504 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.504 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.504 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.504 12:34:38 -- accel/accel.sh@21 -- # val=0x1 00:07:05.504 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.504 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.504 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.504 12:34:38 -- accel/accel.sh@21 -- # val= 00:07:05.504 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.504 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.504 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.504 12:34:38 -- accel/accel.sh@21 -- # val= 00:07:05.504 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.504 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.504 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.504 12:34:38 -- accel/accel.sh@21 -- # val=decompress 00:07:05.504 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.504 12:34:38 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:05.504 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.505 12:34:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.505 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.505 12:34:38 -- accel/accel.sh@21 -- # val= 00:07:05.505 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.505 12:34:38 -- accel/accel.sh@21 -- # val=software 00:07:05.505 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.505 12:34:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.505 12:34:38 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:05.505 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.505 12:34:38 -- accel/accel.sh@21 -- # val=32 00:07:05.505 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.505 12:34:38 -- 
accel/accel.sh@20 -- # read -r var val 00:07:05.505 12:34:38 -- accel/accel.sh@21 -- # val=32 00:07:05.505 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.505 12:34:38 -- accel/accel.sh@21 -- # val=1 00:07:05.505 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.505 12:34:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.505 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.505 12:34:38 -- accel/accel.sh@21 -- # val=Yes 00:07:05.505 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.505 12:34:38 -- accel/accel.sh@21 -- # val= 00:07:05.505 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.505 12:34:38 -- accel/accel.sh@21 -- # val= 00:07:05.505 12:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.505 12:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:06.896 12:34:39 -- accel/accel.sh@21 -- # val= 00:07:06.896 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.896 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:06.896 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:06.896 12:34:39 -- accel/accel.sh@21 -- # val= 00:07:06.896 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.896 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:06.896 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:06.896 12:34:39 -- accel/accel.sh@21 -- # val= 00:07:06.896 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.896 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:06.896 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:06.896 12:34:39 -- accel/accel.sh@21 -- # val= 00:07:06.897 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.897 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:06.897 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:06.897 12:34:39 -- accel/accel.sh@21 -- # val= 00:07:06.897 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.897 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:06.897 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:06.897 12:34:39 -- accel/accel.sh@21 -- # val= 00:07:06.897 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.897 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:06.897 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:06.897 12:34:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.897 12:34:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:06.897 12:34:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.897 00:07:06.897 real 0m2.566s 00:07:06.897 user 0m2.381s 00:07:06.897 sys 0m0.193s 00:07:06.897 12:34:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.897 12:34:39 -- common/autotest_common.sh@10 -- # set +x 00:07:06.897 ************************************ 00:07:06.897 END TEST accel_decomp 00:07:06.897 ************************************ 00:07:06.897 12:34:39 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:06.897 12:34:39 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:06.897 12:34:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.897 12:34:39 -- common/autotest_common.sh@10 -- # set +x 00:07:06.897 ************************************ 00:07:06.897 START TEST accel_decmop_full 00:07:06.897 ************************************ 00:07:06.897 12:34:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:06.897 12:34:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.897 12:34:39 -- accel/accel.sh@17 -- # local accel_module 00:07:06.897 12:34:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:06.897 12:34:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:06.897 12:34:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.897 12:34:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.897 12:34:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.897 12:34:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.897 12:34:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.897 12:34:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.897 12:34:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.897 12:34:39 -- accel/accel.sh@42 -- # jq -r . 00:07:06.897 [2024-11-20 12:34:39.698946] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.897 [2024-11-20 12:34:39.699052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335081 ] 00:07:06.897 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.897 [2024-11-20 12:34:39.762620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.897 [2024-11-20 12:34:39.825940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.282 12:34:40 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:08.282 00:07:08.282 SPDK Configuration: 00:07:08.282 Core mask: 0x1 00:07:08.282 00:07:08.282 Accel Perf Configuration: 00:07:08.282 Workload Type: decompress 00:07:08.282 Transfer size: 111250 bytes 00:07:08.282 Vector count 1 00:07:08.282 Module: software 00:07:08.282 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:08.282 Queue depth: 32 00:07:08.282 Allocate depth: 32 00:07:08.282 # threads/core: 1 00:07:08.282 Run time: 1 seconds 00:07:08.282 Verify: Yes 00:07:08.282 00:07:08.282 Running for 1 seconds... 
00:07:08.282 00:07:08.282 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.282 ------------------------------------------------------------------------------------ 00:07:08.282 0,0 4064/s 167 MiB/s 0 0 00:07:08.282 ==================================================================================== 00:07:08.282 Total 4064/s 431 MiB/s 0 0' 00:07:08.282 12:34:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:08.282 12:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:08.282 12:34:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:08.282 12:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:08.282 12:34:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.282 12:34:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.282 12:34:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.282 12:34:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.282 12:34:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.282 12:34:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.282 12:34:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.282 12:34:40 -- accel/accel.sh@42 -- # jq -r . 00:07:08.282 [2024-11-20 12:34:40.975847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.282 [2024-11-20 12:34:40.975891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335415 ] 00:07:08.282 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.282 [2024-11-20 12:34:41.026966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.282 [2024-11-20 12:34:41.089242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.282 12:34:41 -- accel/accel.sh@21 -- # val= 00:07:08.282 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.282 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.282 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:08.282 12:34:41 -- accel/accel.sh@21 -- # val= 00:07:08.282 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.282 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.282 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:08.282 12:34:41 -- accel/accel.sh@21 -- # val= 00:07:08.282 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.282 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.282 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:08.282 12:34:41 -- accel/accel.sh@21 -- # val=0x1 00:07:08.282 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.282 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.282 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:08.282 12:34:41 -- accel/accel.sh@21 -- # val= 00:07:08.282 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.282 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.282 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:08.282 12:34:41 -- accel/accel.sh@21 -- # val= 00:07:08.282 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.282 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.282 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:08.282 12:34:41 -- accel/accel.sh@21 -- # val=decompress 00:07:08.283 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 
00:07:08.283 12:34:41 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:08.283 12:34:41 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:08.283 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:08.283 12:34:41 -- accel/accel.sh@21 -- # val= 00:07:08.283 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:08.283 12:34:41 -- accel/accel.sh@21 -- # val=software 00:07:08.283 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.283 12:34:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:08.283 12:34:41 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:08.283 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:08.283 12:34:41 -- accel/accel.sh@21 -- # val=32 00:07:08.283 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:08.283 12:34:41 -- accel/accel.sh@21 -- # val=32 00:07:08.283 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:08.283 12:34:41 -- accel/accel.sh@21 -- # val=1 00:07:08.283 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:08.283 12:34:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.283 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:08.283 12:34:41 -- accel/accel.sh@21 -- # val=Yes 00:07:08.283 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:08.283 12:34:41 -- accel/accel.sh@21 -- # val= 00:07:08.283 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:08.283 12:34:41 -- accel/accel.sh@21 -- # val= 00:07:08.283 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:08.283 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:09.227 12:34:42 -- accel/accel.sh@21 -- # val= 00:07:09.227 12:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.227 12:34:42 -- accel/accel.sh@20 -- # IFS=: 00:07:09.227 12:34:42 -- accel/accel.sh@20 -- # read -r var val 00:07:09.227 12:34:42 -- accel/accel.sh@21 -- # val= 00:07:09.227 12:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.227 12:34:42 -- accel/accel.sh@20 -- # IFS=: 00:07:09.227 12:34:42 -- accel/accel.sh@20 -- # read -r var val 00:07:09.227 12:34:42 -- accel/accel.sh@21 -- # val= 00:07:09.227 12:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.227 12:34:42 -- 
accel/accel.sh@20 -- # IFS=: 00:07:09.227 12:34:42 -- accel/accel.sh@20 -- # read -r var val 00:07:09.227 12:34:42 -- accel/accel.sh@21 -- # val= 00:07:09.227 12:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.227 12:34:42 -- accel/accel.sh@20 -- # IFS=: 00:07:09.227 12:34:42 -- accel/accel.sh@20 -- # read -r var val 00:07:09.227 12:34:42 -- accel/accel.sh@21 -- # val= 00:07:09.227 12:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.227 12:34:42 -- accel/accel.sh@20 -- # IFS=: 00:07:09.227 12:34:42 -- accel/accel.sh@20 -- # read -r var val 00:07:09.227 12:34:42 -- accel/accel.sh@21 -- # val= 00:07:09.227 12:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.227 12:34:42 -- accel/accel.sh@20 -- # IFS=: 00:07:09.227 12:34:42 -- accel/accel.sh@20 -- # read -r var val 00:07:09.227 12:34:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.227 12:34:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:09.227 12:34:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.227 00:07:09.227 real 0m2.564s 00:07:09.227 user 0m2.374s 00:07:09.227 sys 0m0.196s 00:07:09.227 12:34:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.227 12:34:42 -- common/autotest_common.sh@10 -- # set +x 00:07:09.227 ************************************ 00:07:09.227 END TEST accel_decmop_full 00:07:09.227 ************************************ 00:07:09.227 12:34:42 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:09.227 12:34:42 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:09.227 12:34:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.227 12:34:42 -- common/autotest_common.sh@10 -- # set +x 00:07:09.227 ************************************ 00:07:09.227 START TEST accel_decomp_mcore 00:07:09.227 ************************************ 00:07:09.227 12:34:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:09.227 12:34:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.227 12:34:42 -- accel/accel.sh@17 -- # local accel_module 00:07:09.227 12:34:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:09.227 12:34:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:09.227 12:34:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.227 12:34:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.227 12:34:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.227 12:34:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.227 12:34:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.227 12:34:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.227 12:34:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.227 12:34:42 -- accel/accel.sh@42 -- # jq -r . 00:07:09.227 [2024-11-20 12:34:42.301440] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
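The accel_decmop_full test that finished above passes -o 0 to accel_perf; its configuration summary reports a transfer size of 111250 bytes instead of the 4096 bytes used in the earlier runs, and its Total row is again consistent with transfers/s times transfer size (a rough check, not captured output):
  echo "scale=1; 4064 * 111250 / 1048576" | bc   # ~431.2 MiB/s, matches "Total 4064/s 431 MiB/s"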
00:07:09.227 [2024-11-20 12:34:42.301512] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335772 ] 00:07:09.227 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.488 [2024-11-20 12:34:42.363412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.488 [2024-11-20 12:34:42.429102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.488 [2024-11-20 12:34:42.429341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.488 [2024-11-20 12:34:42.429496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.488 [2024-11-20 12:34:42.429496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.874 12:34:43 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:10.874 00:07:10.874 SPDK Configuration: 00:07:10.874 Core mask: 0xf 00:07:10.874 00:07:10.874 Accel Perf Configuration: 00:07:10.874 Workload Type: decompress 00:07:10.874 Transfer size: 4096 bytes 00:07:10.874 Vector count 1 00:07:10.874 Module: software 00:07:10.874 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:10.874 Queue depth: 32 00:07:10.874 Allocate depth: 32 00:07:10.874 # threads/core: 1 00:07:10.874 Run time: 1 seconds 00:07:10.874 Verify: Yes 00:07:10.874 00:07:10.874 Running for 1 seconds... 00:07:10.874 00:07:10.874 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.874 ------------------------------------------------------------------------------------ 00:07:10.874 0,0 58304/s 107 MiB/s 0 0 00:07:10.875 3,0 58880/s 108 MiB/s 0 0 00:07:10.875 2,0 86112/s 158 MiB/s 0 0 00:07:10.875 1,0 58944/s 108 MiB/s 0 0 00:07:10.875 ==================================================================================== 00:07:10.875 Total 262240/s 1024 MiB/s 0 0' 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:10.875 12:34:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:10.875 12:34:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.875 12:34:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.875 12:34:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.875 12:34:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.875 12:34:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.875 12:34:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.875 12:34:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.875 12:34:43 -- accel/accel.sh@42 -- # jq -r . 00:07:10.875 [2024-11-20 12:34:43.590479] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
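For the accel_decomp_mcore run above, the -m 0xf core mask (binary 1111, cores 0-3) matches the "Total cores available: 4" notice and the four reactors started, and the four per-core rows in the table sum to the Total row (sanity check, not captured output):
  echo $(( 58304 + 58880 + 86112 + 58944 ))      # 262240, matches "Total 262240/s"
  echo "scale=1; 262240 * 4096 / 1048576" | bc   # ~1024.4 MiB/s, matches "1024 MiB/s"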
00:07:10.875 [2024-11-20 12:34:43.590558] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335975 ] 00:07:10.875 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.875 [2024-11-20 12:34:43.653545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.875 [2024-11-20 12:34:43.718403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.875 [2024-11-20 12:34:43.718520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.875 [2024-11-20 12:34:43.718674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.875 [2024-11-20 12:34:43.718674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val= 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val= 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val= 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val=0xf 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val= 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val= 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val=decompress 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.875 12:34:43 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val= 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val=software 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.875 12:34:43 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 
00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val=32 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val=32 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val=1 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val=Yes 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val= 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.875 12:34:43 -- accel/accel.sh@21 -- # val= 00:07:10.875 12:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.875 12:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:11.871 12:34:44 -- accel/accel.sh@21 -- # val= 00:07:11.871 12:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.871 12:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:11.871 12:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:11.871 12:34:44 -- accel/accel.sh@21 -- # val= 00:07:11.871 12:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.871 12:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:11.871 12:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:11.871 12:34:44 -- accel/accel.sh@21 -- # val= 00:07:11.871 12:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.871 12:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:11.871 12:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:11.871 12:34:44 -- accel/accel.sh@21 -- # val= 00:07:11.871 12:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.871 12:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:11.871 12:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:11.871 12:34:44 -- accel/accel.sh@21 -- # val= 00:07:11.871 12:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.871 12:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:11.871 12:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:11.871 12:34:44 -- accel/accel.sh@21 -- # val= 00:07:11.871 12:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.871 12:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:11.871 12:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:11.871 12:34:44 -- accel/accel.sh@21 -- # val= 00:07:11.871 12:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.871 12:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:11.871 12:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:11.871 12:34:44 -- accel/accel.sh@21 -- # val= 00:07:11.871 12:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.871 12:34:44 -- 
accel/accel.sh@20 -- # IFS=: 00:07:11.871 12:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:11.871 12:34:44 -- accel/accel.sh@21 -- # val= 00:07:11.871 12:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.871 12:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:11.871 12:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:11.871 12:34:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.871 12:34:44 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:11.871 12:34:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.871 00:07:11.871 real 0m2.584s 00:07:11.871 user 0m8.846s 00:07:11.871 sys 0m0.215s 00:07:11.871 12:34:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.871 12:34:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.871 ************************************ 00:07:11.871 END TEST accel_decomp_mcore 00:07:11.871 ************************************ 00:07:11.871 12:34:44 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:11.871 12:34:44 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:11.871 12:34:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.871 12:34:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.871 ************************************ 00:07:11.871 START TEST accel_decomp_full_mcore 00:07:11.871 ************************************ 00:07:11.871 12:34:44 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:11.871 12:34:44 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.871 12:34:44 -- accel/accel.sh@17 -- # local accel_module 00:07:11.871 12:34:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:11.871 12:34:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:11.872 12:34:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.872 12:34:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.872 12:34:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.872 12:34:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.872 12:34:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.872 12:34:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.872 12:34:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.872 12:34:44 -- accel/accel.sh@42 -- # jq -r . 00:07:11.872 [2024-11-20 12:34:44.928653] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
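In the accel_decomp_mcore timing summary above, user time (0m8.846s) exceeds real time (0m2.584s) because the workload ran on four reactor cores concurrently; the ratio gives a rough measure of average core utilisation (not captured output):
  echo "scale=2; 8.846 / 2.584" | bc   # ~3.42 of the 4 cores busy on average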
00:07:11.872 [2024-11-20 12:34:44.928738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336172 ] 00:07:11.872 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.132 [2024-11-20 12:34:44.991722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.132 [2024-11-20 12:34:45.059790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.132 [2024-11-20 12:34:45.059929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.132 [2024-11-20 12:34:45.060080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.132 [2024-11-20 12:34:45.060081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.517 12:34:46 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:13.517 00:07:13.517 SPDK Configuration: 00:07:13.517 Core mask: 0xf 00:07:13.517 00:07:13.517 Accel Perf Configuration: 00:07:13.517 Workload Type: decompress 00:07:13.517 Transfer size: 111250 bytes 00:07:13.517 Vector count 1 00:07:13.517 Module: software 00:07:13.517 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:13.517 Queue depth: 32 00:07:13.517 Allocate depth: 32 00:07:13.517 # threads/core: 1 00:07:13.517 Run time: 1 seconds 00:07:13.517 Verify: Yes 00:07:13.517 00:07:13.517 Running for 1 seconds... 00:07:13.517 00:07:13.517 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.517 ------------------------------------------------------------------------------------ 00:07:13.517 0,0 4064/s 167 MiB/s 0 0 00:07:13.517 3,0 4096/s 169 MiB/s 0 0 00:07:13.517 2,0 5920/s 244 MiB/s 0 0 00:07:13.517 1,0 4064/s 167 MiB/s 0 0 00:07:13.517 ==================================================================================== 00:07:13.517 Total 18144/s 1925 MiB/s 0 0' 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.517 12:34:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.517 12:34:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.517 12:34:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.517 12:34:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.517 12:34:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.517 12:34:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.517 12:34:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.517 12:34:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.517 12:34:46 -- accel/accel.sh@42 -- # jq -r . 00:07:13.517 [2024-11-20 12:34:46.235445] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
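The accel_decomp_full_mcore table above combines the 0xf core mask with the 111250-byte transfer size; its per-core rows again sum to the Total row (sanity check, not captured output):
  echo $(( 4064 + 4096 + 5920 + 4064 ))          # 18144, matches "Total 18144/s"
  echo "scale=1; 18144 * 111250 / 1048576" | bc  # ~1925.0 MiB/s, matches "1925 MiB/s"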
00:07:13.517 [2024-11-20 12:34:46.235564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336486 ] 00:07:13.517 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.517 [2024-11-20 12:34:46.303600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.517 [2024-11-20 12:34:46.368060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.517 [2024-11-20 12:34:46.368173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.517 [2024-11-20 12:34:46.368325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.517 [2024-11-20 12:34:46.368326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val= 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val= 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val= 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val=0xf 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val= 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val= 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val=decompress 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.517 12:34:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val= 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val=software 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.517 12:34:46 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val=32 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val=32 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val=1 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val=Yes 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val= 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.517 12:34:46 -- accel/accel.sh@21 -- # val= 00:07:13.517 12:34:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.517 12:34:46 -- accel/accel.sh@20 -- # read -r var val 00:07:14.459 12:34:47 -- accel/accel.sh@21 -- # val= 00:07:14.459 12:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.459 12:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:14.459 12:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:14.459 12:34:47 -- accel/accel.sh@21 -- # val= 00:07:14.460 12:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.460 12:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:14.460 12:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:14.460 12:34:47 -- accel/accel.sh@21 -- # val= 00:07:14.460 12:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.460 12:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:14.460 12:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:14.460 12:34:47 -- accel/accel.sh@21 -- # val= 00:07:14.460 12:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.460 12:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:14.460 12:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:14.460 12:34:47 -- accel/accel.sh@21 -- # val= 00:07:14.460 12:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.460 12:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:14.460 12:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:14.460 12:34:47 -- accel/accel.sh@21 -- # val= 00:07:14.460 12:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.460 12:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:14.460 12:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:14.460 12:34:47 -- accel/accel.sh@21 -- # val= 00:07:14.460 12:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.460 12:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:14.460 12:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:14.460 12:34:47 -- accel/accel.sh@21 -- # val= 00:07:14.460 12:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.460 12:34:47 
-- accel/accel.sh@20 -- # IFS=: 00:07:14.460 12:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:14.460 12:34:47 -- accel/accel.sh@21 -- # val= 00:07:14.460 12:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.460 12:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:14.460 12:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:14.460 12:34:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.460 12:34:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:14.460 12:34:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.460 00:07:14.460 real 0m2.614s 00:07:14.460 user 0m8.926s 00:07:14.460 sys 0m0.223s 00:07:14.460 12:34:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.460 12:34:47 -- common/autotest_common.sh@10 -- # set +x 00:07:14.460 ************************************ 00:07:14.460 END TEST accel_decomp_full_mcore 00:07:14.460 ************************************ 00:07:14.460 12:34:47 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:14.460 12:34:47 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:14.460 12:34:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.460 12:34:47 -- common/autotest_common.sh@10 -- # set +x 00:07:14.460 ************************************ 00:07:14.460 START TEST accel_decomp_mthread 00:07:14.460 ************************************ 00:07:14.460 12:34:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:14.460 12:34:47 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.460 12:34:47 -- accel/accel.sh@17 -- # local accel_module 00:07:14.460 12:34:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:14.460 12:34:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:14.460 12:34:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.460 12:34:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.460 12:34:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.460 12:34:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.460 12:34:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.460 12:34:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.460 12:34:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.460 12:34:47 -- accel/accel.sh@42 -- # jq -r . 00:07:14.721 [2024-11-20 12:34:47.588144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:14.721 [2024-11-20 12:34:47.588223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336846 ] 00:07:14.721 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.721 [2024-11-20 12:34:47.649852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.721 [2024-11-20 12:34:47.711743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.109 12:34:48 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:16.109 00:07:16.109 SPDK Configuration: 00:07:16.109 Core mask: 0x1 00:07:16.109 00:07:16.109 Accel Perf Configuration: 00:07:16.109 Workload Type: decompress 00:07:16.109 Transfer size: 4096 bytes 00:07:16.109 Vector count 1 00:07:16.109 Module: software 00:07:16.109 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:16.109 Queue depth: 32 00:07:16.109 Allocate depth: 32 00:07:16.109 # threads/core: 2 00:07:16.109 Run time: 1 seconds 00:07:16.109 Verify: Yes 00:07:16.109 00:07:16.109 Running for 1 seconds... 00:07:16.109 00:07:16.109 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.109 ------------------------------------------------------------------------------------ 00:07:16.109 0,1 31616/s 58 MiB/s 0 0 00:07:16.109 0,0 31488/s 58 MiB/s 0 0 00:07:16.109 ==================================================================================== 00:07:16.109 Total 63104/s 246 MiB/s 0 0' 00:07:16.109 12:34:48 -- accel/accel.sh@20 -- # IFS=: 00:07:16.109 12:34:48 -- accel/accel.sh@20 -- # read -r var val 00:07:16.109 12:34:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:16.109 12:34:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:16.109 12:34:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.109 12:34:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.109 12:34:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.109 12:34:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.109 12:34:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.109 12:34:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.109 12:34:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.109 12:34:48 -- accel/accel.sh@42 -- # jq -r . 00:07:16.109 [2024-11-20 12:34:48.869235] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
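Note: the accel_perf table above shows the software decompress module moving 4096-byte blocks at roughly 63 k transfers/s (246 MiB/s) with two threads on core 0. A hedged sketch of an equivalent standalone invocation follows; the binary and input-file paths are copied from the command line recorded in this log, and the per-flag comments are inferred from the printed configuration rather than from accel_perf's help text.

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # -t 1           : run time, 1 second        ("Run time: 1 seconds")
  # -w decompress  : workload type             ("Workload Type: decompress")
  # -l .../bib     : compressed input file     ("File Name: .../test/accel/bib")
  # -y             : verify the output         ("Verify: Yes")
  # -T 2           : threads per core          ("# threads/core: 2")
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2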
00:07:16.109 [2024-11-20 12:34:48.869312] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337159 ] 00:07:16.109 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.109 [2024-11-20 12:34:48.930628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.109 [2024-11-20 12:34:48.993423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.109 12:34:49 -- accel/accel.sh@21 -- # val= 00:07:16.109 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.109 12:34:49 -- accel/accel.sh@21 -- # val= 00:07:16.109 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.109 12:34:49 -- accel/accel.sh@21 -- # val= 00:07:16.109 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.109 12:34:49 -- accel/accel.sh@21 -- # val=0x1 00:07:16.109 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.109 12:34:49 -- accel/accel.sh@21 -- # val= 00:07:16.109 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.109 12:34:49 -- accel/accel.sh@21 -- # val= 00:07:16.109 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.109 12:34:49 -- accel/accel.sh@21 -- # val=decompress 00:07:16.109 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.109 12:34:49 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.109 12:34:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.109 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.109 12:34:49 -- accel/accel.sh@21 -- # val= 00:07:16.109 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.109 12:34:49 -- accel/accel.sh@21 -- # val=software 00:07:16.109 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.109 12:34:49 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.109 12:34:49 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:16.109 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.109 12:34:49 -- accel/accel.sh@21 -- # val=32 00:07:16.109 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.109 12:34:49 -- 
accel/accel.sh@20 -- # read -r var val 00:07:16.109 12:34:49 -- accel/accel.sh@21 -- # val=32 00:07:16.109 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.109 12:34:49 -- accel/accel.sh@21 -- # val=2 00:07:16.109 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.109 12:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.110 12:34:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.110 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.110 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.110 12:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.110 12:34:49 -- accel/accel.sh@21 -- # val=Yes 00:07:16.110 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.110 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.110 12:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.110 12:34:49 -- accel/accel.sh@21 -- # val= 00:07:16.110 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.110 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.110 12:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.110 12:34:49 -- accel/accel.sh@21 -- # val= 00:07:16.110 12:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.110 12:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.110 12:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:17.052 12:34:50 -- accel/accel.sh@21 -- # val= 00:07:17.052 12:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.052 12:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:17.052 12:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:17.052 12:34:50 -- accel/accel.sh@21 -- # val= 00:07:17.052 12:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.052 12:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:17.052 12:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:17.052 12:34:50 -- accel/accel.sh@21 -- # val= 00:07:17.052 12:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.052 12:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:17.052 12:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:17.052 12:34:50 -- accel/accel.sh@21 -- # val= 00:07:17.052 12:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.052 12:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:17.052 12:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:17.052 12:34:50 -- accel/accel.sh@21 -- # val= 00:07:17.052 12:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.052 12:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:17.052 12:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:17.052 12:34:50 -- accel/accel.sh@21 -- # val= 00:07:17.052 12:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.052 12:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:17.052 12:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:17.052 12:34:50 -- accel/accel.sh@21 -- # val= 00:07:17.052 12:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.052 12:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:17.052 12:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:17.052 12:34:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.052 12:34:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:17.052 12:34:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.052 00:07:17.052 real 0m2.571s 00:07:17.052 user 0m2.381s 00:07:17.052 sys 0m0.198s 00:07:17.052 12:34:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.052 12:34:50 -- common/autotest_common.sh@10 -- # set +x 
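Note: the xtrace above shows the harness reading each accel_perf option (queue depth 32, allocate depth 32, 2 threads per core, 1 second run, software module, verify) for the second, JSON-configured run, which passes its accel configuration as "-c /dev/fd/62". A hedged sketch of that pattern using process substitution; the empty JSON object is an illustrative assumption, not the harness's actual build_accel_config output.

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # Feed an accel JSON config to accel_perf over an anonymous file descriptor,
  # mirroring the "-c /dev/fd/62" form seen in the logged command line.
  accel_conf='{}'
  $SPDK/build/examples/accel_perf -c <(echo "$accel_conf" | jq -r .) \
      -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2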
00:07:17.052 ************************************ 00:07:17.052 END TEST accel_decomp_mthread 00:07:17.052 ************************************ 00:07:17.313 12:34:50 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:17.313 12:34:50 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:17.313 12:34:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.313 12:34:50 -- common/autotest_common.sh@10 -- # set +x 00:07:17.313 ************************************ 00:07:17.313 START TEST accel_deomp_full_mthread 00:07:17.313 ************************************ 00:07:17.313 12:34:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:17.313 12:34:50 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.313 12:34:50 -- accel/accel.sh@17 -- # local accel_module 00:07:17.313 12:34:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:17.313 12:34:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:17.313 12:34:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.313 12:34:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.313 12:34:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.313 12:34:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.313 12:34:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.313 12:34:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.313 12:34:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.313 12:34:50 -- accel/accel.sh@42 -- # jq -r . 00:07:17.313 [2024-11-20 12:34:50.200036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:17.314 [2024-11-20 12:34:50.200131] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337336 ] 00:07:17.314 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.314 [2024-11-20 12:34:50.261596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.314 [2024-11-20 12:34:50.323940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.700 12:34:51 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:18.700 00:07:18.700 SPDK Configuration: 00:07:18.700 Core mask: 0x1 00:07:18.700 00:07:18.700 Accel Perf Configuration: 00:07:18.700 Workload Type: decompress 00:07:18.700 Transfer size: 111250 bytes 00:07:18.700 Vector count 1 00:07:18.700 Module: software 00:07:18.700 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:18.700 Queue depth: 32 00:07:18.700 Allocate depth: 32 00:07:18.700 # threads/core: 2 00:07:18.700 Run time: 1 seconds 00:07:18.700 Verify: Yes 00:07:18.700 00:07:18.700 Running for 1 seconds... 
00:07:18.700 00:07:18.700 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.700 ------------------------------------------------------------------------------------ 00:07:18.700 0,1 2112/s 87 MiB/s 0 0 00:07:18.700 0,0 2048/s 84 MiB/s 0 0 00:07:18.700 ==================================================================================== 00:07:18.700 Total 4160/s 441 MiB/s 0 0' 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.700 12:34:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.700 12:34:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.700 12:34:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.700 12:34:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.700 12:34:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.700 12:34:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.700 12:34:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.700 12:34:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.700 12:34:51 -- accel/accel.sh@42 -- # jq -r . 00:07:18.700 [2024-11-20 12:34:51.511807] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:18.700 [2024-11-20 12:34:51.511901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337557 ] 00:07:18.700 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.700 [2024-11-20 12:34:51.575236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.700 [2024-11-20 12:34:51.637182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val= 00:07:18.700 12:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val= 00:07:18.700 12:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val= 00:07:18.700 12:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val=0x1 00:07:18.700 12:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val= 00:07:18.700 12:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val= 00:07:18.700 12:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val=decompress 00:07:18.700 12:34:51 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:18.700 12:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val= 00:07:18.700 12:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val=software 00:07:18.700 12:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:18.700 12:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val=32 00:07:18.700 12:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val=32 00:07:18.700 12:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val=2 00:07:18.700 12:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:18.700 12:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val=Yes 00:07:18.700 12:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val= 00:07:18.700 12:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:18.700 12:34:51 -- accel/accel.sh@21 -- # val= 00:07:18.700 12:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:18.700 12:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:20.085 12:34:52 -- accel/accel.sh@21 -- # val= 00:07:20.086 12:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.086 12:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:20.086 12:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:20.086 12:34:52 -- accel/accel.sh@21 -- # val= 00:07:20.086 12:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.086 12:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:20.086 12:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:20.086 12:34:52 -- accel/accel.sh@21 -- # val= 00:07:20.086 12:34:52 -- accel/accel.sh@22 -- # case "$var" in 
00:07:20.086 12:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:20.086 12:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:20.086 12:34:52 -- accel/accel.sh@21 -- # val= 00:07:20.086 12:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.086 12:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:20.086 12:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:20.086 12:34:52 -- accel/accel.sh@21 -- # val= 00:07:20.086 12:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.086 12:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:20.086 12:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:20.086 12:34:52 -- accel/accel.sh@21 -- # val= 00:07:20.086 12:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.086 12:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:20.086 12:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:20.086 12:34:52 -- accel/accel.sh@21 -- # val= 00:07:20.086 12:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.086 12:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:20.086 12:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:20.086 12:34:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.086 12:34:52 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:20.086 12:34:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.086 00:07:20.086 real 0m2.630s 00:07:20.086 user 0m2.430s 00:07:20.086 sys 0m0.206s 00:07:20.086 12:34:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.086 12:34:52 -- common/autotest_common.sh@10 -- # set +x 00:07:20.086 ************************************ 00:07:20.086 END TEST accel_deomp_full_mthread 00:07:20.086 ************************************ 00:07:20.086 12:34:52 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:20.086 12:34:52 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:20.086 12:34:52 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:20.086 12:34:52 -- accel/accel.sh@129 -- # build_accel_config 00:07:20.086 12:34:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.086 12:34:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.086 12:34:52 -- common/autotest_common.sh@10 -- # set +x 00:07:20.086 12:34:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.086 12:34:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.086 12:34:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.086 12:34:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.086 12:34:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.086 12:34:52 -- accel/accel.sh@42 -- # jq -r . 00:07:20.086 ************************************ 00:07:20.086 START TEST accel_dif_functional_tests 00:07:20.086 ************************************ 00:07:20.086 12:34:52 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:20.086 [2024-11-20 12:34:52.891316] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
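Note: each accel_perf and DIF run above prints "EAL: No free 2048 kB hugepages reported on node 1" during DPDK EAL startup. A hedged host-side check for that condition, using the standard procfs/sysfs layout; the node number and 2 MiB page size simply mirror the message in this log.

  # Per-node 2 MiB hugepage availability; the EAL notice above corresponds to a
  # zero or absent count for node 1.
  grep -i huge /proc/meminfo
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages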
00:07:20.086 [2024-11-20 12:34:52.891375] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337907 ] 00:07:20.086 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.086 [2024-11-20 12:34:52.952248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.086 [2024-11-20 12:34:53.020444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.086 [2024-11-20 12:34:53.020563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.086 [2024-11-20 12:34:53.020565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.086 00:07:20.086 00:07:20.086 CUnit - A unit testing framework for C - Version 2.1-3 00:07:20.086 http://cunit.sourceforge.net/ 00:07:20.086 00:07:20.086 00:07:20.086 Suite: accel_dif 00:07:20.086 Test: verify: DIF generated, GUARD check ...passed 00:07:20.086 Test: verify: DIF generated, APPTAG check ...passed 00:07:20.086 Test: verify: DIF generated, REFTAG check ...passed 00:07:20.086 Test: verify: DIF not generated, GUARD check ...[2024-11-20 12:34:53.075929] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:20.086 [2024-11-20 12:34:53.075966] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:20.086 passed 00:07:20.086 Test: verify: DIF not generated, APPTAG check ...[2024-11-20 12:34:53.075999] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:20.086 [2024-11-20 12:34:53.076015] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:20.086 passed 00:07:20.086 Test: verify: DIF not generated, REFTAG check ...[2024-11-20 12:34:53.076032] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:20.086 [2024-11-20 12:34:53.076045] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:20.086 passed 00:07:20.086 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:20.086 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-20 12:34:53.076089] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:20.086 passed 00:07:20.086 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:20.086 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:20.086 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:20.086 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-20 12:34:53.076201] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:20.086 passed 00:07:20.086 Test: generate copy: DIF generated, GUARD check ...passed 00:07:20.086 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:20.086 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:20.086 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:20.086 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:20.086 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:20.086 Test: generate copy: iovecs-len validate ...[2024-11-20 12:34:53.076395] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:20.086 passed 00:07:20.086 Test: generate copy: buffer alignment validate ...passed 00:07:20.086 00:07:20.086 Run Summary: Type Total Ran Passed Failed Inactive 00:07:20.086 suites 1 1 n/a 0 0 00:07:20.086 tests 20 20 20 0 0 00:07:20.086 asserts 204 204 204 0 n/a 00:07:20.086 00:07:20.086 Elapsed time = 0.002 seconds 00:07:20.086 00:07:20.086 real 0m0.345s 00:07:20.086 user 0m0.484s 00:07:20.086 sys 0m0.124s 00:07:20.086 12:34:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.086 12:34:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.086 ************************************ 00:07:20.086 END TEST accel_dif_functional_tests 00:07:20.086 ************************************ 00:07:20.348 00:07:20.348 real 0m54.884s 00:07:20.348 user 1m3.335s 00:07:20.348 sys 0m5.762s 00:07:20.348 12:34:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.348 12:34:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.348 ************************************ 00:07:20.349 END TEST accel 00:07:20.349 ************************************ 00:07:20.349 12:34:53 -- spdk/autotest.sh@177 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:20.349 12:34:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:20.349 12:34:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.349 12:34:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.349 ************************************ 00:07:20.349 START TEST accel_rpc 00:07:20.349 ************************************ 00:07:20.349 12:34:53 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:20.349 * Looking for test storage... 00:07:20.349 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:20.349 12:34:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:20.349 12:34:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:20.349 12:34:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:20.349 12:34:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:20.349 12:34:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:20.349 12:34:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:20.349 12:34:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:20.349 12:34:53 -- scripts/common.sh@335 -- # IFS=.-: 00:07:20.349 12:34:53 -- scripts/common.sh@335 -- # read -ra ver1 00:07:20.349 12:34:53 -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.349 12:34:53 -- scripts/common.sh@336 -- # read -ra ver2 00:07:20.349 12:34:53 -- scripts/common.sh@337 -- # local 'op=<' 00:07:20.349 12:34:53 -- scripts/common.sh@339 -- # ver1_l=2 00:07:20.349 12:34:53 -- scripts/common.sh@340 -- # ver2_l=1 00:07:20.349 12:34:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:20.349 12:34:53 -- scripts/common.sh@343 -- # case "$op" in 00:07:20.349 12:34:53 -- scripts/common.sh@344 -- # : 1 00:07:20.349 12:34:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:20.349 12:34:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.349 12:34:53 -- scripts/common.sh@364 -- # decimal 1 00:07:20.349 12:34:53 -- scripts/common.sh@352 -- # local d=1 00:07:20.611 12:34:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.611 12:34:53 -- scripts/common.sh@354 -- # echo 1 00:07:20.611 12:34:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:20.611 12:34:53 -- scripts/common.sh@365 -- # decimal 2 00:07:20.611 12:34:53 -- scripts/common.sh@352 -- # local d=2 00:07:20.611 12:34:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.611 12:34:53 -- scripts/common.sh@354 -- # echo 2 00:07:20.611 12:34:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:20.611 12:34:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:20.611 12:34:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:20.611 12:34:53 -- scripts/common.sh@367 -- # return 0 00:07:20.611 12:34:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.611 12:34:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:20.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.611 --rc genhtml_branch_coverage=1 00:07:20.611 --rc genhtml_function_coverage=1 00:07:20.611 --rc genhtml_legend=1 00:07:20.611 --rc geninfo_all_blocks=1 00:07:20.611 --rc geninfo_unexecuted_blocks=1 00:07:20.611 00:07:20.611 ' 00:07:20.611 12:34:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:20.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.611 --rc genhtml_branch_coverage=1 00:07:20.611 --rc genhtml_function_coverage=1 00:07:20.611 --rc genhtml_legend=1 00:07:20.611 --rc geninfo_all_blocks=1 00:07:20.611 --rc geninfo_unexecuted_blocks=1 00:07:20.611 00:07:20.611 ' 00:07:20.611 12:34:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:20.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.611 --rc genhtml_branch_coverage=1 00:07:20.611 --rc genhtml_function_coverage=1 00:07:20.611 --rc genhtml_legend=1 00:07:20.611 --rc geninfo_all_blocks=1 00:07:20.611 --rc geninfo_unexecuted_blocks=1 00:07:20.611 00:07:20.611 ' 00:07:20.611 12:34:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:20.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.611 --rc genhtml_branch_coverage=1 00:07:20.611 --rc genhtml_function_coverage=1 00:07:20.611 --rc genhtml_legend=1 00:07:20.611 --rc geninfo_all_blocks=1 00:07:20.611 --rc geninfo_unexecuted_blocks=1 00:07:20.611 00:07:20.611 ' 00:07:20.611 12:34:53 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:20.611 12:34:53 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=338033 00:07:20.611 12:34:53 -- accel/accel_rpc.sh@15 -- # waitforlisten 338033 00:07:20.611 12:34:53 -- common/autotest_common.sh@829 -- # '[' -z 338033 ']' 00:07:20.611 12:34:53 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:20.611 12:34:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.611 12:34:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.611 12:34:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
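Note: accel_rpc.sh launches a bare spdk_tgt with --wait-for-rpc (pid 338033 above) and blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A hedged sketch of that wait loop; only the binary path, the --wait-for-rpc flag, and the socket path come from the log, while the polling method and retry budget are assumptions.

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt --wait-for-rpc &
  tgt_pid=$!
  # Poll until the target answers on the default RPC socket (assumed strategy).
  for _ in $(seq 1 100); do
      $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done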
00:07:20.611 12:34:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.611 12:34:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.611 [2024-11-20 12:34:53.522075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:20.611 [2024-11-20 12:34:53.522149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid338033 ] 00:07:20.611 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.611 [2024-11-20 12:34:53.589439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.611 [2024-11-20 12:34:53.661853] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:20.611 [2024-11-20 12:34:53.662005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.554 12:34:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.554 12:34:54 -- common/autotest_common.sh@862 -- # return 0 00:07:21.554 12:34:54 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:21.554 12:34:54 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:21.554 12:34:54 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:21.554 12:34:54 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:21.554 12:34:54 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:21.554 12:34:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:21.554 12:34:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.554 12:34:54 -- common/autotest_common.sh@10 -- # set +x 00:07:21.554 ************************************ 00:07:21.554 START TEST accel_assign_opcode 00:07:21.554 ************************************ 00:07:21.554 12:34:54 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:21.554 12:34:54 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:21.554 12:34:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.554 12:34:54 -- common/autotest_common.sh@10 -- # set +x 00:07:21.554 [2024-11-20 12:34:54.327932] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:21.554 12:34:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.554 12:34:54 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:21.554 12:34:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.554 12:34:54 -- common/autotest_common.sh@10 -- # set +x 00:07:21.554 [2024-11-20 12:34:54.339958] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:21.554 12:34:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.554 12:34:54 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:21.554 12:34:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.554 12:34:54 -- common/autotest_common.sh@10 -- # set +x 00:07:21.554 12:34:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.554 12:34:54 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:21.554 12:34:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.554 12:34:54 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:21.554 12:34:54 -- common/autotest_common.sh@10 -- # set +x 00:07:21.554 12:34:54 -- accel/accel_rpc.sh@42 -- # grep software 00:07:21.554 12:34:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:21.554 software 00:07:21.554 00:07:21.554 real 0m0.208s 00:07:21.554 user 0m0.049s 00:07:21.554 sys 0m0.008s 00:07:21.554 12:34:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.554 12:34:54 -- common/autotest_common.sh@10 -- # set +x 00:07:21.554 ************************************ 00:07:21.554 END TEST accel_assign_opcode 00:07:21.554 ************************************ 00:07:21.554 12:34:54 -- accel/accel_rpc.sh@55 -- # killprocess 338033 00:07:21.554 12:34:54 -- common/autotest_common.sh@936 -- # '[' -z 338033 ']' 00:07:21.554 12:34:54 -- common/autotest_common.sh@940 -- # kill -0 338033 00:07:21.554 12:34:54 -- common/autotest_common.sh@941 -- # uname 00:07:21.554 12:34:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:21.554 12:34:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 338033 00:07:21.554 12:34:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:21.554 12:34:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:21.554 12:34:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 338033' 00:07:21.554 killing process with pid 338033 00:07:21.554 12:34:54 -- common/autotest_common.sh@955 -- # kill 338033 00:07:21.554 12:34:54 -- common/autotest_common.sh@960 -- # wait 338033 00:07:21.816 00:07:21.816 real 0m1.567s 00:07:21.816 user 0m1.647s 00:07:21.816 sys 0m0.423s 00:07:21.816 12:34:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.816 12:34:54 -- common/autotest_common.sh@10 -- # set +x 00:07:21.816 ************************************ 00:07:21.816 END TEST accel_rpc 00:07:21.816 ************************************ 00:07:21.816 12:34:54 -- spdk/autotest.sh@178 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:21.816 12:34:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:21.816 12:34:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.816 12:34:54 -- common/autotest_common.sh@10 -- # set +x 00:07:21.816 ************************************ 00:07:21.816 START TEST app_cmdline 00:07:21.816 ************************************ 00:07:21.816 12:34:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:22.078 * Looking for test storage... 
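Note: the accel_assign_opcode test above first assigns the copy opcode to a nonexistent module, then to software, starts the framework, and confirms the assignment (the lone "software" line in the output). A hedged sketch of the same sequence driven directly with rpc.py; the method names and the jq filter are taken from the rpc_cmd lines recorded in the log.

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # Assign the copy opcode to the software module, then bring up the framework.
  $rpc accel_assign_opc -o copy -m software
  $rpc framework_start_init
  # Confirm the assignment; the test greps this output for "software".
  $rpc accel_get_opc_assignments | jq -r .copy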
00:07:22.078 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:22.078 12:34:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:22.078 12:34:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:22.078 12:34:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:22.078 12:34:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:22.078 12:34:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:22.078 12:34:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:22.078 12:34:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:22.078 12:34:55 -- scripts/common.sh@335 -- # IFS=.-: 00:07:22.078 12:34:55 -- scripts/common.sh@335 -- # read -ra ver1 00:07:22.078 12:34:55 -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.078 12:34:55 -- scripts/common.sh@336 -- # read -ra ver2 00:07:22.078 12:34:55 -- scripts/common.sh@337 -- # local 'op=<' 00:07:22.078 12:34:55 -- scripts/common.sh@339 -- # ver1_l=2 00:07:22.078 12:34:55 -- scripts/common.sh@340 -- # ver2_l=1 00:07:22.078 12:34:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:22.078 12:34:55 -- scripts/common.sh@343 -- # case "$op" in 00:07:22.078 12:34:55 -- scripts/common.sh@344 -- # : 1 00:07:22.078 12:34:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:22.078 12:34:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:22.078 12:34:55 -- scripts/common.sh@364 -- # decimal 1 00:07:22.078 12:34:55 -- scripts/common.sh@352 -- # local d=1 00:07:22.078 12:34:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.078 12:34:55 -- scripts/common.sh@354 -- # echo 1 00:07:22.078 12:34:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:22.078 12:34:55 -- scripts/common.sh@365 -- # decimal 2 00:07:22.078 12:34:55 -- scripts/common.sh@352 -- # local d=2 00:07:22.078 12:34:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.078 12:34:55 -- scripts/common.sh@354 -- # echo 2 00:07:22.078 12:34:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:22.078 12:34:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:22.078 12:34:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:22.078 12:34:55 -- scripts/common.sh@367 -- # return 0 00:07:22.078 12:34:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.078 12:34:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:22.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.078 --rc genhtml_branch_coverage=1 00:07:22.078 --rc genhtml_function_coverage=1 00:07:22.078 --rc genhtml_legend=1 00:07:22.078 --rc geninfo_all_blocks=1 00:07:22.078 --rc geninfo_unexecuted_blocks=1 00:07:22.078 00:07:22.078 ' 00:07:22.078 12:34:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:22.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.078 --rc genhtml_branch_coverage=1 00:07:22.078 --rc genhtml_function_coverage=1 00:07:22.078 --rc genhtml_legend=1 00:07:22.078 --rc geninfo_all_blocks=1 00:07:22.078 --rc geninfo_unexecuted_blocks=1 00:07:22.078 00:07:22.078 ' 00:07:22.078 12:34:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:22.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.078 --rc genhtml_branch_coverage=1 00:07:22.078 --rc genhtml_function_coverage=1 00:07:22.078 --rc genhtml_legend=1 00:07:22.078 --rc geninfo_all_blocks=1 00:07:22.078 --rc geninfo_unexecuted_blocks=1 00:07:22.078 00:07:22.078 ' 
00:07:22.078 12:34:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:22.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.078 --rc genhtml_branch_coverage=1 00:07:22.078 --rc genhtml_function_coverage=1 00:07:22.078 --rc genhtml_legend=1 00:07:22.078 --rc geninfo_all_blocks=1 00:07:22.078 --rc geninfo_unexecuted_blocks=1 00:07:22.078 00:07:22.078 ' 00:07:22.078 12:34:55 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:22.078 12:34:55 -- app/cmdline.sh@17 -- # spdk_tgt_pid=338409 00:07:22.078 12:34:55 -- app/cmdline.sh@18 -- # waitforlisten 338409 00:07:22.078 12:34:55 -- common/autotest_common.sh@829 -- # '[' -z 338409 ']' 00:07:22.078 12:34:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.078 12:34:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.078 12:34:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.078 12:34:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.078 12:34:55 -- common/autotest_common.sh@10 -- # set +x 00:07:22.078 12:34:55 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:22.078 [2024-11-20 12:34:55.116376] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:22.078 [2024-11-20 12:34:55.116450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid338409 ] 00:07:22.078 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.078 [2024-11-20 12:34:55.181290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.340 [2024-11-20 12:34:55.253484] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.340 [2024-11-20 12:34:55.253614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.911 12:34:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.911 12:34:55 -- common/autotest_common.sh@862 -- # return 0 00:07:22.911 12:34:55 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:23.172 { 00:07:23.172 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:23.172 "fields": { 00:07:23.172 "major": 24, 00:07:23.172 "minor": 1, 00:07:23.172 "patch": 1, 00:07:23.172 "suffix": "-pre", 00:07:23.172 "commit": "c13c99a5e" 00:07:23.172 } 00:07:23.172 } 00:07:23.172 12:34:56 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:23.172 12:34:56 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:23.172 12:34:56 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:23.172 12:34:56 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:23.172 12:34:56 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:23.172 12:34:56 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:23.172 12:34:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.172 12:34:56 -- app/cmdline.sh@26 -- # sort 00:07:23.172 12:34:56 -- common/autotest_common.sh@10 -- # set +x 00:07:23.172 12:34:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.172 12:34:56 -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:23.172 12:34:56 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:23.172 12:34:56 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.172 12:34:56 -- common/autotest_common.sh@650 -- # local es=0 00:07:23.172 12:34:56 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.172 12:34:56 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:23.172 12:34:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.172 12:34:56 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:23.172 12:34:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.172 12:34:56 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:23.172 12:34:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.172 12:34:56 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:23.172 12:34:56 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:23.172 12:34:56 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.172 request: 00:07:23.172 { 00:07:23.172 "method": "env_dpdk_get_mem_stats", 00:07:23.172 "req_id": 1 00:07:23.172 } 00:07:23.172 Got JSON-RPC error response 00:07:23.172 response: 00:07:23.172 { 00:07:23.172 "code": -32601, 00:07:23.172 "message": "Method not found" 00:07:23.172 } 00:07:23.172 12:34:56 -- common/autotest_common.sh@653 -- # es=1 00:07:23.172 12:34:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:23.172 12:34:56 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:23.172 12:34:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:23.172 12:34:56 -- app/cmdline.sh@1 -- # killprocess 338409 00:07:23.172 12:34:56 -- common/autotest_common.sh@936 -- # '[' -z 338409 ']' 00:07:23.172 12:34:56 -- common/autotest_common.sh@940 -- # kill -0 338409 00:07:23.172 12:34:56 -- common/autotest_common.sh@941 -- # uname 00:07:23.173 12:34:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:23.173 12:34:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 338409 00:07:23.433 12:34:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:23.433 12:34:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:23.433 12:34:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 338409' 00:07:23.433 killing process with pid 338409 00:07:23.433 12:34:56 -- common/autotest_common.sh@955 -- # kill 338409 00:07:23.433 12:34:56 -- common/autotest_common.sh@960 -- # wait 338409 00:07:23.433 00:07:23.433 real 0m1.647s 00:07:23.433 user 0m1.946s 00:07:23.433 sys 0m0.427s 00:07:23.433 12:34:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.433 12:34:56 -- common/autotest_common.sh@10 -- # set +x 00:07:23.433 ************************************ 00:07:23.433 END TEST app_cmdline 00:07:23.433 ************************************ 00:07:23.695 12:34:56 -- spdk/autotest.sh@179 -- # run_test version 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:23.695 12:34:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:23.695 12:34:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.695 12:34:56 -- common/autotest_common.sh@10 -- # set +x 00:07:23.695 ************************************ 00:07:23.695 START TEST version 00:07:23.695 ************************************ 00:07:23.695 12:34:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:23.695 * Looking for test storage... 00:07:23.695 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:23.695 12:34:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:23.695 12:34:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:23.695 12:34:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:23.695 12:34:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:23.695 12:34:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:23.695 12:34:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:23.695 12:34:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:23.695 12:34:56 -- scripts/common.sh@335 -- # IFS=.-: 00:07:23.695 12:34:56 -- scripts/common.sh@335 -- # read -ra ver1 00:07:23.695 12:34:56 -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.695 12:34:56 -- scripts/common.sh@336 -- # read -ra ver2 00:07:23.695 12:34:56 -- scripts/common.sh@337 -- # local 'op=<' 00:07:23.695 12:34:56 -- scripts/common.sh@339 -- # ver1_l=2 00:07:23.695 12:34:56 -- scripts/common.sh@340 -- # ver2_l=1 00:07:23.695 12:34:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:23.695 12:34:56 -- scripts/common.sh@343 -- # case "$op" in 00:07:23.695 12:34:56 -- scripts/common.sh@344 -- # : 1 00:07:23.695 12:34:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:23.695 12:34:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.695 12:34:56 -- scripts/common.sh@364 -- # decimal 1 00:07:23.695 12:34:56 -- scripts/common.sh@352 -- # local d=1 00:07:23.695 12:34:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.695 12:34:56 -- scripts/common.sh@354 -- # echo 1 00:07:23.695 12:34:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:23.695 12:34:56 -- scripts/common.sh@365 -- # decimal 2 00:07:23.695 12:34:56 -- scripts/common.sh@352 -- # local d=2 00:07:23.695 12:34:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.695 12:34:56 -- scripts/common.sh@354 -- # echo 2 00:07:23.695 12:34:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:23.695 12:34:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:23.695 12:34:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:23.695 12:34:56 -- scripts/common.sh@367 -- # return 0 00:07:23.695 12:34:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.695 12:34:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:23.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.695 --rc genhtml_branch_coverage=1 00:07:23.695 --rc genhtml_function_coverage=1 00:07:23.695 --rc genhtml_legend=1 00:07:23.695 --rc geninfo_all_blocks=1 00:07:23.695 --rc geninfo_unexecuted_blocks=1 00:07:23.695 00:07:23.695 ' 00:07:23.695 12:34:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:23.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.695 --rc genhtml_branch_coverage=1 00:07:23.695 --rc genhtml_function_coverage=1 00:07:23.695 --rc genhtml_legend=1 00:07:23.695 --rc geninfo_all_blocks=1 00:07:23.695 --rc geninfo_unexecuted_blocks=1 00:07:23.695 00:07:23.695 ' 00:07:23.695 12:34:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:23.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.695 --rc genhtml_branch_coverage=1 00:07:23.695 --rc genhtml_function_coverage=1 00:07:23.695 --rc genhtml_legend=1 00:07:23.695 --rc geninfo_all_blocks=1 00:07:23.695 --rc geninfo_unexecuted_blocks=1 00:07:23.695 00:07:23.695 ' 00:07:23.695 12:34:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:23.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.695 --rc genhtml_branch_coverage=1 00:07:23.695 --rc genhtml_function_coverage=1 00:07:23.695 --rc genhtml_legend=1 00:07:23.695 --rc geninfo_all_blocks=1 00:07:23.695 --rc geninfo_unexecuted_blocks=1 00:07:23.695 00:07:23.695 ' 00:07:23.695 12:34:56 -- app/version.sh@17 -- # get_header_version major 00:07:23.695 12:34:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:23.695 12:34:56 -- app/version.sh@14 -- # cut -f2 00:07:23.695 12:34:56 -- app/version.sh@14 -- # tr -d '"' 00:07:23.695 12:34:56 -- app/version.sh@17 -- # major=24 00:07:23.695 12:34:56 -- app/version.sh@18 -- # get_header_version minor 00:07:23.695 12:34:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:23.695 12:34:56 -- app/version.sh@14 -- # cut -f2 00:07:23.695 12:34:56 -- app/version.sh@14 -- # tr -d '"' 00:07:23.695 12:34:56 -- app/version.sh@18 -- # minor=1 00:07:23.695 12:34:56 -- app/version.sh@19 -- # get_header_version patch 00:07:23.696 12:34:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:23.696 12:34:56 -- app/version.sh@14 -- # cut -f2 00:07:23.696 12:34:56 -- app/version.sh@14 -- # tr -d '"' 00:07:23.696 12:34:56 -- app/version.sh@19 -- # patch=1 00:07:23.696 12:34:56 -- app/version.sh@20 -- # get_header_version suffix 00:07:23.696 12:34:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:23.696 12:34:56 -- app/version.sh@14 -- # cut -f2 00:07:23.696 12:34:56 -- app/version.sh@14 -- # tr -d '"' 00:07:23.696 12:34:56 -- app/version.sh@20 -- # suffix=-pre 00:07:23.696 12:34:56 -- app/version.sh@22 -- # version=24.1 00:07:23.696 12:34:56 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:23.696 12:34:56 -- app/version.sh@25 -- # version=24.1.1 00:07:23.696 12:34:56 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:23.696 12:34:56 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:23.958 12:34:56 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:23.958 12:34:56 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:23.958 12:34:56 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:23.958 00:07:23.958 real 0m0.272s 00:07:23.958 user 0m0.156s 00:07:23.958 sys 0m0.161s 00:07:23.958 12:34:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.958 12:34:56 -- common/autotest_common.sh@10 -- # set +x 00:07:23.958 ************************************ 00:07:23.958 END TEST version 00:07:23.958 ************************************ 00:07:23.958 12:34:56 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:23.958 12:34:56 -- spdk/autotest.sh@191 -- # uname -s 00:07:23.958 12:34:56 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:23.958 12:34:56 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:23.958 12:34:56 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:23.958 12:34:56 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:23.958 12:34:56 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:23.958 12:34:56 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:23.958 12:34:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:23.958 12:34:56 -- common/autotest_common.sh@10 -- # set +x 00:07:23.958 12:34:56 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:23.958 12:34:56 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:23.958 12:34:56 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:23.958 12:34:56 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:23.958 12:34:56 -- spdk/autotest.sh@278 -- # '[' rdma = rdma ']' 00:07:23.958 12:34:56 -- spdk/autotest.sh@279 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:23.958 12:34:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:23.958 12:34:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.958 12:34:56 -- common/autotest_common.sh@10 -- # set +x 00:07:23.958 ************************************ 00:07:23.958 START TEST nvmf_rdma 00:07:23.958 ************************************ 00:07:23.958 12:34:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:23.958 * Looking 
for test storage... 00:07:23.958 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:23.958 12:34:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:23.958 12:34:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:23.958 12:34:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:24.220 12:34:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:24.220 12:34:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:24.220 12:34:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:24.220 12:34:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:24.220 12:34:57 -- scripts/common.sh@335 -- # IFS=.-: 00:07:24.220 12:34:57 -- scripts/common.sh@335 -- # read -ra ver1 00:07:24.220 12:34:57 -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.220 12:34:57 -- scripts/common.sh@336 -- # read -ra ver2 00:07:24.220 12:34:57 -- scripts/common.sh@337 -- # local 'op=<' 00:07:24.220 12:34:57 -- scripts/common.sh@339 -- # ver1_l=2 00:07:24.220 12:34:57 -- scripts/common.sh@340 -- # ver2_l=1 00:07:24.220 12:34:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:24.220 12:34:57 -- scripts/common.sh@343 -- # case "$op" in 00:07:24.220 12:34:57 -- scripts/common.sh@344 -- # : 1 00:07:24.220 12:34:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:24.220 12:34:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:24.220 12:34:57 -- scripts/common.sh@364 -- # decimal 1 00:07:24.220 12:34:57 -- scripts/common.sh@352 -- # local d=1 00:07:24.220 12:34:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.220 12:34:57 -- scripts/common.sh@354 -- # echo 1 00:07:24.220 12:34:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:24.220 12:34:57 -- scripts/common.sh@365 -- # decimal 2 00:07:24.220 12:34:57 -- scripts/common.sh@352 -- # local d=2 00:07:24.220 12:34:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.220 12:34:57 -- scripts/common.sh@354 -- # echo 2 00:07:24.220 12:34:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:24.220 12:34:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:24.221 12:34:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:24.221 12:34:57 -- scripts/common.sh@367 -- # return 0 00:07:24.221 12:34:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.221 12:34:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:24.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.221 --rc genhtml_branch_coverage=1 00:07:24.221 --rc genhtml_function_coverage=1 00:07:24.221 --rc genhtml_legend=1 00:07:24.221 --rc geninfo_all_blocks=1 00:07:24.221 --rc geninfo_unexecuted_blocks=1 00:07:24.221 00:07:24.221 ' 00:07:24.221 12:34:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:24.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.221 --rc genhtml_branch_coverage=1 00:07:24.221 --rc genhtml_function_coverage=1 00:07:24.221 --rc genhtml_legend=1 00:07:24.221 --rc geninfo_all_blocks=1 00:07:24.221 --rc geninfo_unexecuted_blocks=1 00:07:24.221 00:07:24.221 ' 00:07:24.221 12:34:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:24.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.221 --rc genhtml_branch_coverage=1 00:07:24.221 --rc genhtml_function_coverage=1 00:07:24.221 --rc genhtml_legend=1 00:07:24.221 --rc geninfo_all_blocks=1 00:07:24.221 --rc geninfo_unexecuted_blocks=1 00:07:24.221 
00:07:24.221 ' 00:07:24.221 12:34:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:24.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.221 --rc genhtml_branch_coverage=1 00:07:24.221 --rc genhtml_function_coverage=1 00:07:24.221 --rc genhtml_legend=1 00:07:24.221 --rc geninfo_all_blocks=1 00:07:24.221 --rc geninfo_unexecuted_blocks=1 00:07:24.221 00:07:24.221 ' 00:07:24.221 12:34:57 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:24.221 12:34:57 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:24.221 12:34:57 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.221 12:34:57 -- nvmf/common.sh@7 -- # uname -s 00:07:24.221 12:34:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.221 12:34:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.221 12:34:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.221 12:34:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.221 12:34:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.221 12:34:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.221 12:34:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.221 12:34:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.221 12:34:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.221 12:34:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.221 12:34:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:24.221 12:34:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:24.221 12:34:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.221 12:34:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.221 12:34:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.221 12:34:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:24.221 12:34:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.221 12:34:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.221 12:34:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.221 12:34:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.221 12:34:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.221 12:34:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.221 12:34:57 -- paths/export.sh@5 -- # export PATH 00:07:24.221 12:34:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.221 12:34:57 -- nvmf/common.sh@46 -- # : 0 00:07:24.221 12:34:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:24.221 12:34:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:24.221 12:34:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:24.221 12:34:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.221 12:34:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.221 12:34:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:24.221 12:34:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:24.221 12:34:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:24.221 12:34:57 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:24.221 12:34:57 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:24.221 12:34:57 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:24.221 12:34:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:24.221 12:34:57 -- common/autotest_common.sh@10 -- # set +x 00:07:24.221 12:34:57 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:24.221 12:34:57 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:24.221 12:34:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:24.221 12:34:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.221 12:34:57 -- common/autotest_common.sh@10 -- # set +x 00:07:24.221 ************************************ 00:07:24.221 START TEST nvmf_example 00:07:24.221 ************************************ 00:07:24.221 12:34:57 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:24.221 * Looking for test storage... 
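The TEST version block above boils down to reading SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h and cross-checking the result against the in-tree Python bindings. A minimal sketch of that derivation, mirroring the grep/cut/tr pipeline visible in the trace (SPDK_DIR and the helper name are placeholders for this workspace, not the literal version.sh code):

  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # assumption: point at your own checkout
  get_ver() {   # e.g. get_ver MAJOR
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$SPDK_DIR/include/spdk/version.h" \
          | cut -f2 | tr -d '"'
  }
  major=$(get_ver MAJOR); minor=$(get_ver MINOR); patch=$(get_ver PATCH); suffix=$(get_ver SUFFIX)
  version="$major.$minor"                        # 24.1 in this run
  (( patch != 0 )) && version="$version.$patch"  # 24.1.1
  echo "$version"                                # version.sh then emits 24.1.1rc0 for the -pre suffix
  # Cross-check against the Python package, as app/version.sh@30 does in the trace:
  PYTHONPATH="$SPDK_DIR/python" python3 -c 'import spdk; print(spdk.__version__)'   # 24.1.1rc0
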
00:07:24.221 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:24.221 12:34:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:24.221 12:34:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:24.221 12:34:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:24.484 12:34:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:24.484 12:34:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:24.484 12:34:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:24.484 12:34:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:24.484 12:34:57 -- scripts/common.sh@335 -- # IFS=.-: 00:07:24.484 12:34:57 -- scripts/common.sh@335 -- # read -ra ver1 00:07:24.484 12:34:57 -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.484 12:34:57 -- scripts/common.sh@336 -- # read -ra ver2 00:07:24.484 12:34:57 -- scripts/common.sh@337 -- # local 'op=<' 00:07:24.484 12:34:57 -- scripts/common.sh@339 -- # ver1_l=2 00:07:24.484 12:34:57 -- scripts/common.sh@340 -- # ver2_l=1 00:07:24.484 12:34:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:24.484 12:34:57 -- scripts/common.sh@343 -- # case "$op" in 00:07:24.484 12:34:57 -- scripts/common.sh@344 -- # : 1 00:07:24.484 12:34:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:24.484 12:34:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:24.484 12:34:57 -- scripts/common.sh@364 -- # decimal 1 00:07:24.484 12:34:57 -- scripts/common.sh@352 -- # local d=1 00:07:24.484 12:34:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.484 12:34:57 -- scripts/common.sh@354 -- # echo 1 00:07:24.484 12:34:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:24.484 12:34:57 -- scripts/common.sh@365 -- # decimal 2 00:07:24.484 12:34:57 -- scripts/common.sh@352 -- # local d=2 00:07:24.484 12:34:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.484 12:34:57 -- scripts/common.sh@354 -- # echo 2 00:07:24.484 12:34:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:24.484 12:34:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:24.484 12:34:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:24.484 12:34:57 -- scripts/common.sh@367 -- # return 0 00:07:24.484 12:34:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.484 12:34:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:24.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.484 --rc genhtml_branch_coverage=1 00:07:24.484 --rc genhtml_function_coverage=1 00:07:24.484 --rc genhtml_legend=1 00:07:24.484 --rc geninfo_all_blocks=1 00:07:24.484 --rc geninfo_unexecuted_blocks=1 00:07:24.484 00:07:24.484 ' 00:07:24.484 12:34:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:24.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.484 --rc genhtml_branch_coverage=1 00:07:24.484 --rc genhtml_function_coverage=1 00:07:24.484 --rc genhtml_legend=1 00:07:24.484 --rc geninfo_all_blocks=1 00:07:24.484 --rc geninfo_unexecuted_blocks=1 00:07:24.484 00:07:24.484 ' 00:07:24.484 12:34:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:24.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.484 --rc genhtml_branch_coverage=1 00:07:24.484 --rc genhtml_function_coverage=1 00:07:24.484 --rc genhtml_legend=1 00:07:24.484 --rc geninfo_all_blocks=1 00:07:24.484 --rc geninfo_unexecuted_blocks=1 00:07:24.484 00:07:24.484 ' 
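The scripts/common.sh chatter that repeats before every test (lt 1.15 2, decimal 1, decimal 2, ...) is a field-wise version comparison used to decide whether the installed lcov is older than 2.x and therefore needs the --rc lcov_*_coverage=1 options. A condensed sketch of that comparison; version_lt is a made-up name for what common.sh spells as lt/cmp_versions:

  version_lt() {   # succeeds when $1 < $2, comparing dot/dash/colon separated numeric fields
      local IFS=.-: v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal is not less-than
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'old lcov: enable branch/function coverage flags'
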
00:07:24.484 12:34:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:24.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.484 --rc genhtml_branch_coverage=1 00:07:24.484 --rc genhtml_function_coverage=1 00:07:24.484 --rc genhtml_legend=1 00:07:24.484 --rc geninfo_all_blocks=1 00:07:24.484 --rc geninfo_unexecuted_blocks=1 00:07:24.484 00:07:24.484 ' 00:07:24.484 12:34:57 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.484 12:34:57 -- nvmf/common.sh@7 -- # uname -s 00:07:24.484 12:34:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.484 12:34:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.484 12:34:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.484 12:34:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.484 12:34:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.484 12:34:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.484 12:34:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.484 12:34:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.484 12:34:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.484 12:34:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.484 12:34:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:24.484 12:34:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:24.484 12:34:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.484 12:34:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.484 12:34:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.484 12:34:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:24.484 12:34:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.484 12:34:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.484 12:34:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.484 12:34:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.484 12:34:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.484 12:34:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.484 12:34:57 -- paths/export.sh@5 -- # export PATH 00:07:24.485 12:34:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.485 12:34:57 -- nvmf/common.sh@46 -- # : 0 00:07:24.485 12:34:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:24.485 12:34:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:24.485 12:34:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:24.485 12:34:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.485 12:34:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.485 12:34:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:24.485 12:34:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:24.485 12:34:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:24.485 12:34:57 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:24.485 12:34:57 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:24.485 12:34:57 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:24.485 12:34:57 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:24.485 12:34:57 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:24.485 12:34:57 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:24.485 12:34:57 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:24.485 12:34:57 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:24.485 12:34:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:24.485 12:34:57 -- common/autotest_common.sh@10 -- # set +x 00:07:24.485 12:34:57 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:24.485 12:34:57 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:07:24.485 12:34:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.485 12:34:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:24.485 12:34:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:24.485 12:34:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:24.485 12:34:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.485 12:34:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.485 12:34:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.485 12:34:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:24.485 12:34:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:24.485 12:34:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:24.485 12:34:57 -- 
common/autotest_common.sh@10 -- # set +x 00:07:32.627 12:35:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:32.627 12:35:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:32.627 12:35:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:32.627 12:35:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:32.627 12:35:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:32.627 12:35:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:32.627 12:35:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:32.627 12:35:04 -- nvmf/common.sh@294 -- # net_devs=() 00:07:32.627 12:35:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:32.627 12:35:04 -- nvmf/common.sh@295 -- # e810=() 00:07:32.627 12:35:04 -- nvmf/common.sh@295 -- # local -ga e810 00:07:32.627 12:35:04 -- nvmf/common.sh@296 -- # x722=() 00:07:32.627 12:35:04 -- nvmf/common.sh@296 -- # local -ga x722 00:07:32.627 12:35:04 -- nvmf/common.sh@297 -- # mlx=() 00:07:32.627 12:35:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:32.627 12:35:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.627 12:35:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.627 12:35:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.627 12:35:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.627 12:35:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.627 12:35:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.627 12:35:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.627 12:35:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.627 12:35:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.627 12:35:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.627 12:35:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.627 12:35:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:32.627 12:35:04 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:07:32.627 12:35:04 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:07:32.627 12:35:04 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:07:32.627 12:35:04 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:07:32.627 12:35:04 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:07:32.627 12:35:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:32.627 12:35:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:32.627 12:35:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:07:32.627 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:07:32.627 12:35:04 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:07:32.627 12:35:04 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:07:32.627 12:35:04 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:32.627 12:35:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:32.627 12:35:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:32.627 12:35:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:32.627 12:35:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:32.627 12:35:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:07:32.627 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:07:32.627 12:35:04 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:07:32.627 12:35:04 -- nvmf/common.sh@345 -- # [[ 
mlx5_core == unbound ]] 00:07:32.627 12:35:04 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:32.627 12:35:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:32.627 12:35:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:32.627 12:35:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:32.627 12:35:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:32.627 12:35:04 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:07:32.627 12:35:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:32.627 12:35:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.627 12:35:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:32.627 12:35:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.627 12:35:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:07:32.627 Found net devices under 0000:98:00.0: mlx_0_0 00:07:32.627 12:35:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.627 12:35:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:32.628 12:35:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.628 12:35:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:32.628 12:35:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.628 12:35:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:07:32.628 Found net devices under 0000:98:00.1: mlx_0_1 00:07:32.628 12:35:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.628 12:35:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:32.628 12:35:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:32.628 12:35:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:32.628 12:35:04 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:07:32.628 12:35:04 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:07:32.628 12:35:04 -- nvmf/common.sh@408 -- # rdma_device_init 00:07:32.628 12:35:04 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:07:32.628 12:35:04 -- nvmf/common.sh@57 -- # uname 00:07:32.628 12:35:04 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:07:32.628 12:35:04 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:07:32.628 12:35:04 -- nvmf/common.sh@62 -- # modprobe ib_core 00:07:32.628 12:35:04 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:07:32.628 12:35:04 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:07:32.628 12:35:04 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:07:32.628 12:35:04 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:07:32.628 12:35:04 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:07:32.628 12:35:04 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:07:32.628 12:35:04 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:32.628 12:35:04 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:07:32.628 12:35:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:32.628 12:35:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:32.628 12:35:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:32.628 12:35:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:32.628 12:35:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:32.628 12:35:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:32.628 12:35:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.628 12:35:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:32.628 12:35:04 -- nvmf/common.sh@103 
-- # echo mlx_0_0 00:07:32.628 12:35:04 -- nvmf/common.sh@104 -- # continue 2 00:07:32.628 12:35:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:32.628 12:35:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.628 12:35:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:32.628 12:35:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.628 12:35:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:32.628 12:35:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:07:32.628 12:35:04 -- nvmf/common.sh@104 -- # continue 2 00:07:32.628 12:35:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:32.628 12:35:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:07:32.628 12:35:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:07:32.628 12:35:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:07:32.628 12:35:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:32.628 12:35:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:32.628 12:35:04 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:07:32.628 12:35:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:07:32.628 12:35:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:07:32.628 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:32.628 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:07:32.628 altname enp152s0f0np0 00:07:32.628 altname ens817f0np0 00:07:32.628 inet 192.168.100.8/24 scope global mlx_0_0 00:07:32.628 valid_lft forever preferred_lft forever 00:07:32.628 12:35:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:32.628 12:35:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:07:32.628 12:35:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:07:32.628 12:35:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:07:32.628 12:35:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:32.628 12:35:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:32.628 12:35:04 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:07:32.628 12:35:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:07:32.628 12:35:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:07:32.628 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:32.628 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:07:32.628 altname enp152s0f1np1 00:07:32.628 altname ens817f1np1 00:07:32.628 inet 192.168.100.9/24 scope global mlx_0_1 00:07:32.628 valid_lft forever preferred_lft forever 00:07:32.628 12:35:04 -- nvmf/common.sh@410 -- # return 0 00:07:32.628 12:35:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:32.628 12:35:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:32.628 12:35:04 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:07:32.628 12:35:04 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:07:32.628 12:35:04 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:07:32.628 12:35:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:32.628 12:35:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:32.628 12:35:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:32.628 12:35:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:32.628 12:35:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:32.628 12:35:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:32.628 12:35:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.628 12:35:04 -- 
nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:32.628 12:35:04 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:07:32.628 12:35:04 -- nvmf/common.sh@104 -- # continue 2 00:07:32.628 12:35:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:32.628 12:35:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.628 12:35:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:32.628 12:35:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.628 12:35:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:32.628 12:35:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:07:32.628 12:35:04 -- nvmf/common.sh@104 -- # continue 2 00:07:32.628 12:35:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:32.628 12:35:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:07:32.628 12:35:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:07:32.628 12:35:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:07:32.628 12:35:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:32.628 12:35:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:32.628 12:35:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:32.628 12:35:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:07:32.628 12:35:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:07:32.628 12:35:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:07:32.628 12:35:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:32.628 12:35:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:32.628 12:35:04 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:07:32.628 192.168.100.9' 00:07:32.628 12:35:04 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:07:32.628 192.168.100.9' 00:07:32.628 12:35:04 -- nvmf/common.sh@445 -- # head -n 1 00:07:32.628 12:35:04 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:32.628 12:35:04 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:32.628 192.168.100.9' 00:07:32.628 12:35:04 -- nvmf/common.sh@446 -- # tail -n +2 00:07:32.628 12:35:04 -- nvmf/common.sh@446 -- # head -n 1 00:07:32.628 12:35:04 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:32.628 12:35:04 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:07:32.628 12:35:04 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:32.628 12:35:04 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:07:32.628 12:35:04 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:07:32.628 12:35:04 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:07:32.628 12:35:04 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:32.628 12:35:04 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:32.628 12:35:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:32.628 12:35:04 -- common/autotest_common.sh@10 -- # set +x 00:07:32.628 12:35:04 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:32.628 12:35:04 -- target/nvmf_example.sh@34 -- # nvmfpid=342897 00:07:32.628 12:35:04 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:32.628 12:35:04 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:32.628 12:35:04 -- target/nvmf_example.sh@36 -- # waitforlisten 342897 00:07:32.628 12:35:04 -- common/autotest_common.sh@829 -- # '[' -z 342897 ']' 00:07:32.628 12:35:04 -- common/autotest_common.sh@833 
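By this point nvmftestinit has located the two Mellanox ports (PCI IDs 0x15b3:0x1015 at 0000:98:00.0/.1), loaded the IB/RDMA kernel modules, and harvested 192.168.100.8 and 192.168.100.9 from mlx_0_0/mlx_0_1. A standalone sketch of that discovery; the vendor filter and interface names below match this host and are assumptions anywhere else:

  # Load the modules the trace probes, then map Mellanox PCI functions to netdevs and their first IPv4 address.
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do sudo modprobe "$m"; done
  for pci in $(lspci -D -d 15b3: | awk '{print $1}'); do
      for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdev" ] || continue
          ifname=$(basename "$netdev")
          addr=$(ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1)
          echo "$pci -> $ifname -> ${addr:-no IPv4 configured}"
      done
  done
  # This run ends up with:
  #   NVMF_FIRST_TARGET_IP=192.168.100.8  NVMF_SECOND_TARGET_IP=192.168.100.9
  #   NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
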
-- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.628 12:35:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.628 12:35:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.628 12:35:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.628 12:35:04 -- common/autotest_common.sh@10 -- # set +x 00:07:32.628 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.628 12:35:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.628 12:35:05 -- common/autotest_common.sh@862 -- # return 0 00:07:32.628 12:35:05 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:32.628 12:35:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:32.628 12:35:05 -- common/autotest_common.sh@10 -- # set +x 00:07:32.628 12:35:05 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:32.628 12:35:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.628 12:35:05 -- common/autotest_common.sh@10 -- # set +x 00:07:32.628 12:35:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.628 12:35:05 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:32.628 12:35:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.628 12:35:05 -- common/autotest_common.sh@10 -- # set +x 00:07:32.628 12:35:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.628 12:35:05 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:32.628 12:35:05 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:32.628 12:35:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.629 12:35:05 -- common/autotest_common.sh@10 -- # set +x 00:07:32.629 12:35:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.629 12:35:05 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:32.629 12:35:05 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:32.629 12:35:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.629 12:35:05 -- common/autotest_common.sh@10 -- # set +x 00:07:32.890 12:35:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.890 12:35:05 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:32.890 12:35:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.890 12:35:05 -- common/autotest_common.sh@10 -- # set +x 00:07:32.890 12:35:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.890 12:35:05 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:32.890 12:35:05 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:32.890 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.161 Initializing NVMe Controllers 00:07:45.161 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:45.161 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:45.161 Initialization 
complete. Launching workers. 00:07:45.161 ======================================================== 00:07:45.161 Latency(us) 00:07:45.161 Device Information : IOPS MiB/s Average min max 00:07:45.161 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 26335.10 102.87 2431.55 664.91 20003.29 00:07:45.161 ======================================================== 00:07:45.161 Total : 26335.10 102.87 2431.55 664.91 20003.29 00:07:45.161 00:07:45.161 12:35:17 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:45.161 12:35:17 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:45.161 12:35:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:45.161 12:35:17 -- nvmf/common.sh@116 -- # sync 00:07:45.161 12:35:17 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:07:45.161 12:35:17 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:07:45.161 12:35:17 -- nvmf/common.sh@119 -- # set +e 00:07:45.161 12:35:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:45.161 12:35:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:07:45.161 rmmod nvme_rdma 00:07:45.161 rmmod nvme_fabrics 00:07:45.161 12:35:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:45.161 12:35:17 -- nvmf/common.sh@123 -- # set -e 00:07:45.161 12:35:17 -- nvmf/common.sh@124 -- # return 0 00:07:45.161 12:35:17 -- nvmf/common.sh@477 -- # '[' -n 342897 ']' 00:07:45.161 12:35:17 -- nvmf/common.sh@478 -- # killprocess 342897 00:07:45.161 12:35:17 -- common/autotest_common.sh@936 -- # '[' -z 342897 ']' 00:07:45.161 12:35:17 -- common/autotest_common.sh@940 -- # kill -0 342897 00:07:45.161 12:35:17 -- common/autotest_common.sh@941 -- # uname 00:07:45.161 12:35:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:45.161 12:35:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 342897 00:07:45.161 12:35:17 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:45.161 12:35:17 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:45.161 12:35:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 342897' 00:07:45.161 killing process with pid 342897 00:07:45.161 12:35:17 -- common/autotest_common.sh@955 -- # kill 342897 00:07:45.161 12:35:17 -- common/autotest_common.sh@960 -- # wait 342897 00:07:45.161 nvmf threads initialize successfully 00:07:45.161 bdev subsystem init successfully 00:07:45.161 created a nvmf target service 00:07:45.161 create targets's poll groups done 00:07:45.161 all subsystems of target started 00:07:45.161 nvmf target is running 00:07:45.161 all subsystems of target stopped 00:07:45.161 destroy targets's poll groups done 00:07:45.161 destroyed the nvmf target service 00:07:45.161 bdev subsystem finish successfully 00:07:45.161 nvmf threads destroy successfully 00:07:45.161 12:35:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:45.161 12:35:17 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:07:45.161 12:35:17 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:45.161 12:35:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:45.161 12:35:17 -- common/autotest_common.sh@10 -- # set +x 00:07:45.161 00:07:45.161 real 0m20.186s 00:07:45.161 user 0m52.391s 00:07:45.161 sys 0m5.840s 00:07:45.161 12:35:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.161 12:35:17 -- common/autotest_common.sh@10 -- # set +x 00:07:45.161 ************************************ 00:07:45.161 END TEST nvmf_example 00:07:45.161 ************************************ 00:07:45.161 12:35:17 -- 
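Stripped of harness plumbing, the nvmf_example test that just finished is one example target plus five RPCs and a single spdk_nvme_perf run, and it measured about 26.3k IOPS / 102.9 MiB/s with ~2.4 ms average latency over 10 s of 4 KiB randrw (-M 30). A sketch of the same sequence driven through scripts/rpc.py (rpc_cmd in the trace is the harness wrapper around it); paths, NQN, and addresses are the ones from this run:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk             # assumption: your checkout
  sudo "$SPDK/build/examples/nvmf" -i 0 -g 10000 -m 0xF &         # the example target started above
  sleep 2                                                         # the harness waits on the RPC socket instead
  sudo "$SPDK/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  sudo "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512           # 64 MiB / 512 B blocks -> Malloc0
  sudo "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  sudo "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  sudo "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4420
  sudo "$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
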
nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:45.161 12:35:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:45.161 12:35:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.161 12:35:17 -- common/autotest_common.sh@10 -- # set +x 00:07:45.161 ************************************ 00:07:45.161 START TEST nvmf_filesystem 00:07:45.161 ************************************ 00:07:45.161 12:35:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:45.161 * Looking for test storage... 00:07:45.161 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:45.161 12:35:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:45.161 12:35:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:45.161 12:35:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:45.161 12:35:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:45.161 12:35:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:45.161 12:35:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:45.161 12:35:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:45.161 12:35:17 -- scripts/common.sh@335 -- # IFS=.-: 00:07:45.161 12:35:17 -- scripts/common.sh@335 -- # read -ra ver1 00:07:45.161 12:35:17 -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.161 12:35:17 -- scripts/common.sh@336 -- # read -ra ver2 00:07:45.161 12:35:17 -- scripts/common.sh@337 -- # local 'op=<' 00:07:45.161 12:35:17 -- scripts/common.sh@339 -- # ver1_l=2 00:07:45.161 12:35:17 -- scripts/common.sh@340 -- # ver2_l=1 00:07:45.161 12:35:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:45.161 12:35:17 -- scripts/common.sh@343 -- # case "$op" in 00:07:45.161 12:35:17 -- scripts/common.sh@344 -- # : 1 00:07:45.161 12:35:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:45.161 12:35:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.161 12:35:17 -- scripts/common.sh@364 -- # decimal 1 00:07:45.161 12:35:17 -- scripts/common.sh@352 -- # local d=1 00:07:45.161 12:35:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.161 12:35:17 -- scripts/common.sh@354 -- # echo 1 00:07:45.161 12:35:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:45.161 12:35:17 -- scripts/common.sh@365 -- # decimal 2 00:07:45.161 12:35:17 -- scripts/common.sh@352 -- # local d=2 00:07:45.161 12:35:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.162 12:35:17 -- scripts/common.sh@354 -- # echo 2 00:07:45.162 12:35:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:45.162 12:35:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:45.162 12:35:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:45.162 12:35:17 -- scripts/common.sh@367 -- # return 0 00:07:45.162 12:35:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.162 12:35:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:45.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.162 --rc genhtml_branch_coverage=1 00:07:45.162 --rc genhtml_function_coverage=1 00:07:45.162 --rc genhtml_legend=1 00:07:45.162 --rc geninfo_all_blocks=1 00:07:45.162 --rc geninfo_unexecuted_blocks=1 00:07:45.162 00:07:45.162 ' 00:07:45.162 12:35:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:45.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.162 --rc genhtml_branch_coverage=1 00:07:45.162 --rc genhtml_function_coverage=1 00:07:45.162 --rc genhtml_legend=1 00:07:45.162 --rc geninfo_all_blocks=1 00:07:45.162 --rc geninfo_unexecuted_blocks=1 00:07:45.162 00:07:45.162 ' 00:07:45.162 12:35:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:45.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.162 --rc genhtml_branch_coverage=1 00:07:45.162 --rc genhtml_function_coverage=1 00:07:45.162 --rc genhtml_legend=1 00:07:45.162 --rc geninfo_all_blocks=1 00:07:45.162 --rc geninfo_unexecuted_blocks=1 00:07:45.162 00:07:45.162 ' 00:07:45.162 12:35:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:45.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.162 --rc genhtml_branch_coverage=1 00:07:45.162 --rc genhtml_function_coverage=1 00:07:45.162 --rc genhtml_legend=1 00:07:45.162 --rc geninfo_all_blocks=1 00:07:45.162 --rc geninfo_unexecuted_blocks=1 00:07:45.162 00:07:45.162 ' 00:07:45.162 12:35:17 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:07:45.162 12:35:17 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:45.162 12:35:17 -- common/autotest_common.sh@34 -- # set -e 00:07:45.162 12:35:17 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:45.162 12:35:17 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:45.162 12:35:17 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:45.162 12:35:17 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:07:45.162 12:35:17 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:45.162 12:35:17 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:45.162 12:35:17 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:45.162 12:35:17 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 
00:07:45.162 12:35:17 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:45.162 12:35:17 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:45.162 12:35:17 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:45.162 12:35:17 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:45.162 12:35:17 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:45.162 12:35:17 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:45.162 12:35:17 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:45.162 12:35:17 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:45.162 12:35:17 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:45.162 12:35:17 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:45.162 12:35:17 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:45.162 12:35:17 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:45.162 12:35:17 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:45.162 12:35:17 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:45.162 12:35:17 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:45.162 12:35:17 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:45.162 12:35:17 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:45.162 12:35:17 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:45.162 12:35:17 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:45.162 12:35:17 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:45.162 12:35:17 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:45.162 12:35:17 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:45.162 12:35:17 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:45.162 12:35:17 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:45.162 12:35:17 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:45.162 12:35:17 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:45.162 12:35:17 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:45.162 12:35:17 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:45.162 12:35:17 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:45.162 12:35:17 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:45.162 12:35:17 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:45.162 12:35:17 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:45.162 12:35:17 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:45.162 12:35:17 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:45.162 12:35:17 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:45.162 12:35:17 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:45.162 12:35:17 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:45.162 12:35:17 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:45.162 12:35:17 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:45.162 12:35:17 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:45.162 12:35:17 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:45.162 12:35:17 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:45.162 12:35:17 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:45.162 12:35:17 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:45.162 12:35:17 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:45.162 12:35:17 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 
00:07:45.162 12:35:17 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:45.162 12:35:17 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:45.162 12:35:17 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:45.162 12:35:17 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:45.162 12:35:17 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:45.162 12:35:17 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:45.162 12:35:17 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:45.162 12:35:17 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:45.162 12:35:17 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:45.162 12:35:17 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:45.162 12:35:17 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:45.162 12:35:17 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:45.162 12:35:17 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:45.162 12:35:17 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:45.162 12:35:17 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:45.162 12:35:17 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:45.162 12:35:17 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:45.162 12:35:17 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:45.162 12:35:17 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:45.162 12:35:17 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:45.162 12:35:17 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:45.162 12:35:17 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:45.162 12:35:17 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:45.162 12:35:17 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:45.162 12:35:17 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:45.162 12:35:17 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:45.162 12:35:17 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:45.162 12:35:17 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:45.162 12:35:17 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:45.162 12:35:17 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:45.162 12:35:17 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:45.162 12:35:17 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:45.162 12:35:17 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:45.162 12:35:17 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:45.162 12:35:17 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:45.162 12:35:17 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:45.162 12:35:17 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:45.162 12:35:17 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:45.162 12:35:17 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:45.162 12:35:17 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:45.162 12:35:17 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:45.162 
12:35:17 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:45.162 12:35:17 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:45.162 12:35:17 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:07:45.162 12:35:17 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:45.162 #define SPDK_CONFIG_H 00:07:45.162 #define SPDK_CONFIG_APPS 1 00:07:45.162 #define SPDK_CONFIG_ARCH native 00:07:45.162 #undef SPDK_CONFIG_ASAN 00:07:45.162 #undef SPDK_CONFIG_AVAHI 00:07:45.162 #undef SPDK_CONFIG_CET 00:07:45.162 #define SPDK_CONFIG_COVERAGE 1 00:07:45.162 #define SPDK_CONFIG_CROSS_PREFIX 00:07:45.162 #undef SPDK_CONFIG_CRYPTO 00:07:45.162 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:45.163 #undef SPDK_CONFIG_CUSTOMOCF 00:07:45.163 #undef SPDK_CONFIG_DAOS 00:07:45.163 #define SPDK_CONFIG_DAOS_DIR 00:07:45.163 #define SPDK_CONFIG_DEBUG 1 00:07:45.163 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:45.163 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:45.163 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:45.163 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:45.163 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:45.163 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:45.163 #define SPDK_CONFIG_EXAMPLES 1 00:07:45.163 #undef SPDK_CONFIG_FC 00:07:45.163 #define SPDK_CONFIG_FC_PATH 00:07:45.163 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:45.163 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:45.163 #undef SPDK_CONFIG_FUSE 00:07:45.163 #undef SPDK_CONFIG_FUZZER 00:07:45.163 #define SPDK_CONFIG_FUZZER_LIB 00:07:45.163 #undef SPDK_CONFIG_GOLANG 00:07:45.163 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:45.163 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:45.163 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:45.163 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:45.163 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:45.163 #define SPDK_CONFIG_IDXD 1 00:07:45.163 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:45.163 #undef SPDK_CONFIG_IPSEC_MB 00:07:45.163 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:45.163 #define SPDK_CONFIG_ISAL 1 00:07:45.163 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:45.163 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:45.163 #define SPDK_CONFIG_LIBDIR 00:07:45.163 #undef SPDK_CONFIG_LTO 00:07:45.163 #define SPDK_CONFIG_MAX_LCORES 00:07:45.163 #define SPDK_CONFIG_NVME_CUSE 1 00:07:45.163 #undef SPDK_CONFIG_OCF 00:07:45.163 #define SPDK_CONFIG_OCF_PATH 00:07:45.163 #define SPDK_CONFIG_OPENSSL_PATH 00:07:45.163 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:45.163 #undef SPDK_CONFIG_PGO_USE 00:07:45.163 #define SPDK_CONFIG_PREFIX /usr/local 00:07:45.163 #undef SPDK_CONFIG_RAID5F 00:07:45.163 #undef SPDK_CONFIG_RBD 00:07:45.163 #define SPDK_CONFIG_RDMA 1 00:07:45.163 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:45.163 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:45.163 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:45.163 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:45.163 #define SPDK_CONFIG_SHARED 1 00:07:45.163 #undef SPDK_CONFIG_SMA 00:07:45.163 #define SPDK_CONFIG_TESTS 1 00:07:45.163 #undef SPDK_CONFIG_TSAN 00:07:45.163 #define SPDK_CONFIG_UBLK 1 00:07:45.163 #define SPDK_CONFIG_UBSAN 1 00:07:45.163 #undef SPDK_CONFIG_UNIT_TESTS 00:07:45.163 #undef SPDK_CONFIG_URING 00:07:45.163 #define SPDK_CONFIG_URING_PATH 00:07:45.163 #undef SPDK_CONFIG_URING_ZNS 00:07:45.163 #undef SPDK_CONFIG_USDT 00:07:45.163 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:45.163 #undef 
SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:45.163 #undef SPDK_CONFIG_VFIO_USER 00:07:45.163 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:45.163 #define SPDK_CONFIG_VHOST 1 00:07:45.163 #define SPDK_CONFIG_VIRTIO 1 00:07:45.163 #undef SPDK_CONFIG_VTUNE 00:07:45.163 #define SPDK_CONFIG_VTUNE_DIR 00:07:45.163 #define SPDK_CONFIG_WERROR 1 00:07:45.163 #define SPDK_CONFIG_WPDK_DIR 00:07:45.163 #undef SPDK_CONFIG_XNVME 00:07:45.163 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:45.163 12:35:17 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:45.163 12:35:17 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:45.163 12:35:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.163 12:35:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.163 12:35:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.163 12:35:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.163 12:35:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.163 12:35:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.163 12:35:17 -- paths/export.sh@5 -- # export PATH 00:07:45.163 12:35:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.163 12:35:17 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:45.163 12:35:17 -- pm/common@6 -- # dirname 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:45.163 12:35:17 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:45.163 12:35:17 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:45.163 12:35:17 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:45.163 12:35:17 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:45.163 12:35:17 -- pm/common@16 -- # TEST_TAG=N/A 00:07:45.163 12:35:17 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:07:45.163 12:35:17 -- common/autotest_common.sh@52 -- # : 1 00:07:45.163 12:35:17 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:45.163 12:35:17 -- common/autotest_common.sh@56 -- # : 0 00:07:45.163 12:35:17 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:45.163 12:35:17 -- common/autotest_common.sh@58 -- # : 0 00:07:45.163 12:35:17 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:45.163 12:35:17 -- common/autotest_common.sh@60 -- # : 1 00:07:45.163 12:35:17 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:45.163 12:35:17 -- common/autotest_common.sh@62 -- # : 0 00:07:45.163 12:35:17 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:45.163 12:35:17 -- common/autotest_common.sh@64 -- # : 00:07:45.163 12:35:17 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:45.163 12:35:17 -- common/autotest_common.sh@66 -- # : 0 00:07:45.163 12:35:17 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:45.163 12:35:17 -- common/autotest_common.sh@68 -- # : 0 00:07:45.163 12:35:17 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:45.163 12:35:17 -- common/autotest_common.sh@70 -- # : 0 00:07:45.163 12:35:17 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:45.163 12:35:17 -- common/autotest_common.sh@72 -- # : 0 00:07:45.163 12:35:17 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:45.163 12:35:17 -- common/autotest_common.sh@74 -- # : 0 00:07:45.163 12:35:17 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:45.163 12:35:17 -- common/autotest_common.sh@76 -- # : 0 00:07:45.163 12:35:17 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:45.163 12:35:17 -- common/autotest_common.sh@78 -- # : 0 00:07:45.163 12:35:17 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:45.163 12:35:17 -- common/autotest_common.sh@80 -- # : 1 00:07:45.163 12:35:17 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:45.163 12:35:17 -- common/autotest_common.sh@82 -- # : 0 00:07:45.163 12:35:17 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:45.163 12:35:17 -- common/autotest_common.sh@84 -- # : 0 00:07:45.163 12:35:17 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:45.163 12:35:17 -- common/autotest_common.sh@86 -- # : 1 00:07:45.163 12:35:17 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:45.163 12:35:17 -- common/autotest_common.sh@88 -- # : 0 00:07:45.163 12:35:17 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:45.163 12:35:17 -- common/autotest_common.sh@90 -- # : 0 00:07:45.163 12:35:17 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:45.163 12:35:17 -- 
common/autotest_common.sh@92 -- # : 0 00:07:45.163 12:35:17 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:45.163 12:35:17 -- common/autotest_common.sh@94 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:45.164 12:35:17 -- common/autotest_common.sh@96 -- # : rdma 00:07:45.164 12:35:17 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:45.164 12:35:17 -- common/autotest_common.sh@98 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:45.164 12:35:17 -- common/autotest_common.sh@100 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:45.164 12:35:17 -- common/autotest_common.sh@102 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:45.164 12:35:17 -- common/autotest_common.sh@104 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:45.164 12:35:17 -- common/autotest_common.sh@106 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:45.164 12:35:17 -- common/autotest_common.sh@108 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:45.164 12:35:17 -- common/autotest_common.sh@110 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:45.164 12:35:17 -- common/autotest_common.sh@112 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:45.164 12:35:17 -- common/autotest_common.sh@114 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:45.164 12:35:17 -- common/autotest_common.sh@116 -- # : 1 00:07:45.164 12:35:17 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:45.164 12:35:17 -- common/autotest_common.sh@118 -- # : 00:07:45.164 12:35:17 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:45.164 12:35:17 -- common/autotest_common.sh@120 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:45.164 12:35:17 -- common/autotest_common.sh@122 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:45.164 12:35:17 -- common/autotest_common.sh@124 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:45.164 12:35:17 -- common/autotest_common.sh@126 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:45.164 12:35:17 -- common/autotest_common.sh@128 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:45.164 12:35:17 -- common/autotest_common.sh@130 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:45.164 12:35:17 -- common/autotest_common.sh@132 -- # : 00:07:45.164 12:35:17 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:45.164 12:35:17 -- common/autotest_common.sh@134 -- # : true 00:07:45.164 12:35:17 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:45.164 12:35:17 -- common/autotest_common.sh@136 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:45.164 12:35:17 -- common/autotest_common.sh@138 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:45.164 
12:35:17 -- common/autotest_common.sh@140 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:45.164 12:35:17 -- common/autotest_common.sh@142 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:45.164 12:35:17 -- common/autotest_common.sh@144 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:45.164 12:35:17 -- common/autotest_common.sh@146 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:45.164 12:35:17 -- common/autotest_common.sh@148 -- # : mlx5 00:07:45.164 12:35:17 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:45.164 12:35:17 -- common/autotest_common.sh@150 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:45.164 12:35:17 -- common/autotest_common.sh@152 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:45.164 12:35:17 -- common/autotest_common.sh@154 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:45.164 12:35:17 -- common/autotest_common.sh@156 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:45.164 12:35:17 -- common/autotest_common.sh@158 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:45.164 12:35:17 -- common/autotest_common.sh@160 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:45.164 12:35:17 -- common/autotest_common.sh@163 -- # : 00:07:45.164 12:35:17 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:45.164 12:35:17 -- common/autotest_common.sh@165 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:45.164 12:35:17 -- common/autotest_common.sh@167 -- # : 0 00:07:45.164 12:35:17 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:45.164 12:35:17 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:45.164 12:35:17 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:45.164 12:35:17 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:45.164 12:35:17 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:45.164 12:35:17 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.164 12:35:17 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.164 12:35:17 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.164 12:35:17 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.164 12:35:17 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:45.164 12:35:17 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:45.164 12:35:17 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:45.164 12:35:17 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:45.164 12:35:17 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:45.164 12:35:17 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:45.164 12:35:17 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:45.164 12:35:17 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:45.164 12:35:17 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:45.164 12:35:17 -- common/autotest_common.sh@190 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:45.164 12:35:17 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:45.164 12:35:17 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:45.164 12:35:17 -- common/autotest_common.sh@196 -- # cat 00:07:45.164 12:35:17 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:45.164 12:35:17 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:45.164 12:35:17 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:45.164 12:35:17 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:45.164 12:35:17 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:45.164 12:35:17 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:45.164 12:35:17 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:45.164 12:35:17 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:45.164 12:35:17 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:45.164 12:35:17 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:45.164 12:35:17 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:45.164 12:35:17 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:45.164 12:35:17 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:45.164 12:35:17 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:45.164 12:35:17 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:45.164 12:35:17 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:45.164 12:35:17 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:45.164 12:35:17 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:45.164 12:35:17 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:45.164 12:35:17 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:07:45.164 12:35:17 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:07:45.165 12:35:17 -- common/autotest_common.sh@249 -- # _LCOV= 00:07:45.165 12:35:17 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:07:45.165 12:35:17 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:07:45.165 12:35:17 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:45.165 12:35:17 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:07:45.165 12:35:17 -- common/autotest_common.sh@255 -- # lcov_opt= 00:07:45.165 12:35:17 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:07:45.165 12:35:17 -- common/autotest_common.sh@259 -- # export valgrind= 00:07:45.165 12:35:17 -- common/autotest_common.sh@259 -- # valgrind= 00:07:45.165 12:35:17 -- 
common/autotest_common.sh@265 -- # uname -s 00:07:45.165 12:35:17 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:07:45.165 12:35:17 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:07:45.165 12:35:17 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:07:45.165 12:35:17 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:07:45.165 12:35:17 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:45.165 12:35:17 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:45.165 12:35:17 -- common/autotest_common.sh@275 -- # MAKE=make 00:07:45.165 12:35:17 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j144 00:07:45.165 12:35:17 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:07:45.165 12:35:17 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:07:45.165 12:35:17 -- common/autotest_common.sh@294 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:07:45.165 12:35:17 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:07:45.165 12:35:17 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:07:45.165 12:35:17 -- common/autotest_common.sh@301 -- # for i in "$@" 00:07:45.165 12:35:17 -- common/autotest_common.sh@302 -- # case "$i" in 00:07:45.165 12:35:17 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=rdma 00:07:45.165 12:35:17 -- common/autotest_common.sh@319 -- # [[ -z 345407 ]] 00:07:45.165 12:35:17 -- common/autotest_common.sh@319 -- # kill -0 345407 00:07:45.165 12:35:17 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:07:45.165 12:35:17 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:07:45.165 12:35:17 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:07:45.165 12:35:17 -- common/autotest_common.sh@332 -- # local mount target_dir 00:07:45.165 12:35:17 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:07:45.165 12:35:17 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:07:45.165 12:35:17 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:07:45.165 12:35:17 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:07:45.165 12:35:17 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.cwVIND 00:07:45.165 12:35:17 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:45.165 12:35:17 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:07:45.165 12:35:17 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:07:45.165 12:35:17 -- common/autotest_common.sh@356 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.cwVIND/tests/target /tmp/spdk.cwVIND 00:07:45.165 12:35:17 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:07:45.165 12:35:17 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:45.165 12:35:17 -- common/autotest_common.sh@328 -- # df -T 00:07:45.165 12:35:17 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:07:45.165 12:35:17 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_devtmpfs 00:07:45.165 12:35:17 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:07:45.165 12:35:17 -- common/autotest_common.sh@363 -- # avails["$mount"]=67108864 00:07:45.165 12:35:17 -- common/autotest_common.sh@363 -- # sizes["$mount"]=67108864 00:07:45.165 12:35:17 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:07:45.165 12:35:17 -- common/autotest_common.sh@361 -- 
# read -r source fs size use avail _ mount 00:07:45.165 12:35:17 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/pmem0 00:07:45.165 12:35:17 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext2 00:07:45.165 12:35:17 -- common/autotest_common.sh@363 -- # avails["$mount"]=4096 00:07:45.165 12:35:17 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5284429824 00:07:45.165 12:35:17 -- common/autotest_common.sh@364 -- # uses["$mount"]=5284425728 00:07:45.165 12:35:17 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:45.165 12:35:17 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_root 00:07:45.165 12:35:17 -- common/autotest_common.sh@362 -- # fss["$mount"]=overlay 00:07:45.165 12:35:17 -- common/autotest_common.sh@363 -- # avails["$mount"]=123804364800 00:07:45.165 12:35:17 -- common/autotest_common.sh@363 -- # sizes["$mount"]=129356541952 00:07:45.165 12:35:17 -- common/autotest_common.sh@364 -- # uses["$mount"]=5552177152 00:07:45.165 12:35:17 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:45.165 12:35:17 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:45.165 12:35:17 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:45.165 12:35:17 -- common/autotest_common.sh@363 -- # avails["$mount"]=64677011456 00:07:45.165 12:35:17 -- common/autotest_common.sh@363 -- # sizes["$mount"]=64678268928 00:07:45.165 12:35:17 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:07:45.165 12:35:17 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:45.165 12:35:17 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:45.165 12:35:17 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:45.165 12:35:17 -- common/autotest_common.sh@363 -- # avails["$mount"]=25861578752 00:07:45.165 12:35:17 -- common/autotest_common.sh@363 -- # sizes["$mount"]=25871310848 00:07:45.165 12:35:17 -- common/autotest_common.sh@364 -- # uses["$mount"]=9732096 00:07:45.165 12:35:17 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:45.165 12:35:17 -- common/autotest_common.sh@362 -- # mounts["$mount"]=efivarfs 00:07:45.165 12:35:17 -- common/autotest_common.sh@362 -- # fss["$mount"]=efivarfs 00:07:45.165 12:35:17 -- common/autotest_common.sh@363 -- # avails["$mount"]=387072 00:07:45.165 12:35:17 -- common/autotest_common.sh@363 -- # sizes["$mount"]=507904 00:07:45.165 12:35:17 -- common/autotest_common.sh@364 -- # uses["$mount"]=116736 00:07:45.165 12:35:17 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:45.165 12:35:17 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:45.165 12:35:17 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:45.165 12:35:17 -- common/autotest_common.sh@363 -- # avails["$mount"]=64678100992 00:07:45.165 12:35:17 -- common/autotest_common.sh@363 -- # sizes["$mount"]=64678273024 00:07:45.165 12:35:17 -- common/autotest_common.sh@364 -- # uses["$mount"]=172032 00:07:45.165 12:35:17 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:45.165 12:35:17 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:45.165 12:35:17 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:45.165 12:35:17 -- common/autotest_common.sh@363 -- # avails["$mount"]=12935639040 00:07:45.165 12:35:17 -- common/autotest_common.sh@363 -- # sizes["$mount"]=12935651328 00:07:45.165 12:35:17 
-- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:07:45.165 12:35:17 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:45.165 12:35:17 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:07:45.165 * Looking for test storage... 00:07:45.165 12:35:17 -- common/autotest_common.sh@369 -- # local target_space new_size 00:07:45.165 12:35:17 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:07:45.165 12:35:17 -- common/autotest_common.sh@373 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:45.165 12:35:17 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:45.165 12:35:17 -- common/autotest_common.sh@373 -- # mount=/ 00:07:45.165 12:35:17 -- common/autotest_common.sh@375 -- # target_space=123804364800 00:07:45.165 12:35:17 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:07:45.165 12:35:17 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:07:45.165 12:35:17 -- common/autotest_common.sh@381 -- # [[ overlay == tmpfs ]] 00:07:45.165 12:35:17 -- common/autotest_common.sh@381 -- # [[ overlay == ramfs ]] 00:07:45.165 12:35:17 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:07:45.165 12:35:17 -- common/autotest_common.sh@382 -- # new_size=7766769664 00:07:45.165 12:35:17 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:45.165 12:35:17 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:45.165 12:35:17 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:45.165 12:35:17 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:45.165 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:45.165 12:35:17 -- common/autotest_common.sh@390 -- # return 0 00:07:45.165 12:35:17 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:07:45.165 12:35:17 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:07:45.165 12:35:17 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:45.165 12:35:17 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:45.165 12:35:17 -- common/autotest_common.sh@1682 -- # true 00:07:45.165 12:35:17 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:07:45.165 12:35:17 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:45.165 12:35:17 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:45.165 12:35:17 -- common/autotest_common.sh@27 -- # exec 00:07:45.165 12:35:17 -- common/autotest_common.sh@29 -- # exec 00:07:45.165 12:35:17 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:45.165 12:35:17 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:45.165 12:35:17 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:45.165 12:35:17 -- common/autotest_common.sh@18 -- # set -x 00:07:45.165 12:35:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:45.165 12:35:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:45.165 12:35:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:45.165 12:35:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:45.165 12:35:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:45.165 12:35:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:45.165 12:35:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:45.165 12:35:17 -- scripts/common.sh@335 -- # IFS=.-: 00:07:45.165 12:35:17 -- scripts/common.sh@335 -- # read -ra ver1 00:07:45.165 12:35:17 -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.165 12:35:17 -- scripts/common.sh@336 -- # read -ra ver2 00:07:45.165 12:35:17 -- scripts/common.sh@337 -- # local 'op=<' 00:07:45.165 12:35:17 -- scripts/common.sh@339 -- # ver1_l=2 00:07:45.166 12:35:17 -- scripts/common.sh@340 -- # ver2_l=1 00:07:45.166 12:35:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:45.166 12:35:17 -- scripts/common.sh@343 -- # case "$op" in 00:07:45.166 12:35:17 -- scripts/common.sh@344 -- # : 1 00:07:45.166 12:35:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:45.166 12:35:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.166 12:35:17 -- scripts/common.sh@364 -- # decimal 1 00:07:45.166 12:35:17 -- scripts/common.sh@352 -- # local d=1 00:07:45.166 12:35:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.166 12:35:17 -- scripts/common.sh@354 -- # echo 1 00:07:45.166 12:35:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:45.166 12:35:17 -- scripts/common.sh@365 -- # decimal 2 00:07:45.166 12:35:17 -- scripts/common.sh@352 -- # local d=2 00:07:45.166 12:35:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.166 12:35:17 -- scripts/common.sh@354 -- # echo 2 00:07:45.166 12:35:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:45.166 12:35:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:45.166 12:35:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:45.166 12:35:17 -- scripts/common.sh@367 -- # return 0 00:07:45.166 12:35:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.166 12:35:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:45.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.166 --rc genhtml_branch_coverage=1 00:07:45.166 --rc genhtml_function_coverage=1 00:07:45.166 --rc genhtml_legend=1 00:07:45.166 --rc geninfo_all_blocks=1 00:07:45.166 --rc geninfo_unexecuted_blocks=1 00:07:45.166 00:07:45.166 ' 00:07:45.166 12:35:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:45.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.166 --rc genhtml_branch_coverage=1 00:07:45.166 --rc genhtml_function_coverage=1 00:07:45.166 --rc genhtml_legend=1 00:07:45.166 --rc geninfo_all_blocks=1 00:07:45.166 --rc geninfo_unexecuted_blocks=1 00:07:45.166 00:07:45.166 ' 00:07:45.166 12:35:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:45.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.166 --rc genhtml_branch_coverage=1 00:07:45.166 --rc genhtml_function_coverage=1 00:07:45.166 --rc genhtml_legend=1 00:07:45.166 --rc geninfo_all_blocks=1 00:07:45.166 --rc 
geninfo_unexecuted_blocks=1 00:07:45.166 00:07:45.166 ' 00:07:45.166 12:35:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:45.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.166 --rc genhtml_branch_coverage=1 00:07:45.166 --rc genhtml_function_coverage=1 00:07:45.166 --rc genhtml_legend=1 00:07:45.166 --rc geninfo_all_blocks=1 00:07:45.166 --rc geninfo_unexecuted_blocks=1 00:07:45.166 00:07:45.166 ' 00:07:45.166 12:35:17 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.166 12:35:17 -- nvmf/common.sh@7 -- # uname -s 00:07:45.166 12:35:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.166 12:35:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.166 12:35:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.166 12:35:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.166 12:35:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.166 12:35:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.166 12:35:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.166 12:35:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.166 12:35:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.166 12:35:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.166 12:35:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:45.166 12:35:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:45.166 12:35:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.166 12:35:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.166 12:35:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.166 12:35:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:45.166 12:35:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.166 12:35:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.166 12:35:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.166 12:35:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.166 12:35:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.166 12:35:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.166 12:35:17 -- paths/export.sh@5 -- # export PATH 00:07:45.166 12:35:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.166 12:35:17 -- nvmf/common.sh@46 -- # : 0 00:07:45.166 12:35:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:45.166 12:35:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:45.166 12:35:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:45.166 12:35:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.166 12:35:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.166 12:35:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:45.166 12:35:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:45.166 12:35:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:45.166 12:35:17 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:45.166 12:35:17 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:45.166 12:35:17 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:45.166 12:35:17 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:07:45.166 12:35:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.166 12:35:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:45.166 12:35:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:45.166 12:35:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:45.166 12:35:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.166 12:35:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.166 12:35:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.166 12:35:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:45.166 12:35:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:45.166 12:35:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:45.166 12:35:17 -- common/autotest_common.sh@10 -- # set +x 00:07:51.760 12:35:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:51.760 12:35:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:51.760 12:35:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:51.760 12:35:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:51.760 12:35:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:51.760 12:35:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:51.760 12:35:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:51.760 12:35:24 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:51.760 12:35:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:51.760 12:35:24 -- nvmf/common.sh@295 -- # e810=() 00:07:51.760 12:35:24 -- nvmf/common.sh@295 -- # local -ga e810 00:07:51.760 12:35:24 -- nvmf/common.sh@296 -- # x722=() 00:07:51.760 12:35:24 -- nvmf/common.sh@296 -- # local -ga x722 00:07:51.760 12:35:24 -- nvmf/common.sh@297 -- # mlx=() 00:07:51.760 12:35:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:51.760 12:35:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.760 12:35:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.760 12:35:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.760 12:35:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.760 12:35:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.760 12:35:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.760 12:35:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.760 12:35:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.760 12:35:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.760 12:35:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.760 12:35:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.760 12:35:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:51.760 12:35:24 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:07:51.760 12:35:24 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:07:51.760 12:35:24 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:07:51.760 12:35:24 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:07:51.760 12:35:24 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:07:51.760 12:35:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:51.760 12:35:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:51.760 12:35:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:07:51.761 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:07:51.761 12:35:24 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:07:51.761 12:35:24 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:07:51.761 12:35:24 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:51.761 12:35:24 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:51.761 12:35:24 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:51.761 12:35:24 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:51.761 12:35:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:51.761 12:35:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:07:51.761 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:07:51.761 12:35:24 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:07:51.761 12:35:24 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:07:51.761 12:35:24 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:51.761 12:35:24 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:51.761 12:35:24 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:51.761 12:35:24 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:51.761 12:35:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:51.761 12:35:24 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:07:51.761 12:35:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:51.761 
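The preceding entries identify the supported NVMf NICs by PCI vendor/device ID (Mellanox 0x15b3, here device 0x1015 on 0000:98:00.0 and 0000:98:00.1), and the entries that follow map each matched PCI address to its network interface through /sys. A minimal stand-alone sketch of that detection step, using lspci in place of the script's pci_bus_cache helper (the variable names below are illustrative, not the framework's):

    #!/usr/bin/env bash
    # Sketch: collect PCI addresses of Mellanox 0x15b3:0x1015 NICs, then map each
    # address to its netdev name, roughly what gather_supported_nvmf_pci_devs does.
    shopt -s nullglob
    mlx_pci_devs=()
    # lspci -Dnmm prints: <domain:bus:dev.fn> "<class>" "<vendor>" "<device>" ...
    while read -r addr _ vendor device _; do
        if [[ ${vendor//\"/} == 15b3 && ${device//\"/} == 1015 ]]; then
            mlx_pci_devs+=("$addr")
        fi
    done < <(lspci -Dnmm)
    for pci in "${mlx_pci_devs[@]}"; do
        # the kernel exposes the bound netdev(s) under the device's net/ directory
        for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
            echo "Found net device under $pci: ${netdev##*/}"
        done
    done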
12:35:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.761 12:35:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:51.761 12:35:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.761 12:35:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:07:51.761 Found net devices under 0000:98:00.0: mlx_0_0 00:07:51.761 12:35:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.761 12:35:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:51.761 12:35:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.761 12:35:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:51.761 12:35:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.761 12:35:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:07:51.761 Found net devices under 0000:98:00.1: mlx_0_1 00:07:51.761 12:35:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.761 12:35:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:51.761 12:35:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:51.761 12:35:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:51.761 12:35:24 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:07:51.761 12:35:24 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:07:51.761 12:35:24 -- nvmf/common.sh@408 -- # rdma_device_init 00:07:51.761 12:35:24 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:07:51.761 12:35:24 -- nvmf/common.sh@57 -- # uname 00:07:51.761 12:35:24 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:07:51.761 12:35:24 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:07:51.761 12:35:24 -- nvmf/common.sh@62 -- # modprobe ib_core 00:07:51.761 12:35:24 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:07:52.022 12:35:24 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:07:52.022 12:35:24 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:07:52.022 12:35:24 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:07:52.022 12:35:24 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:07:52.022 12:35:24 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:07:52.022 12:35:24 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:52.022 12:35:24 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:07:52.022 12:35:24 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:52.022 12:35:24 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:52.022 12:35:24 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:52.022 12:35:24 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:52.022 12:35:24 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:52.022 12:35:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:52.022 12:35:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.022 12:35:24 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:52.022 12:35:24 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:07:52.022 12:35:24 -- nvmf/common.sh@104 -- # continue 2 00:07:52.022 12:35:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:52.022 12:35:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.022 12:35:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:52.022 12:35:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.022 12:35:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:52.022 12:35:24 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:07:52.022 12:35:24 -- nvmf/common.sh@104 -- # continue 2 00:07:52.022 12:35:24 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:52.022 12:35:24 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:07:52.022 12:35:24 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:07:52.022 12:35:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:07:52.022 12:35:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:52.022 12:35:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:52.022 12:35:24 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:07:52.022 12:35:24 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:07:52.022 12:35:24 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:07:52.022 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:52.022 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:07:52.022 altname enp152s0f0np0 00:07:52.022 altname ens817f0np0 00:07:52.022 inet 192.168.100.8/24 scope global mlx_0_0 00:07:52.022 valid_lft forever preferred_lft forever 00:07:52.022 12:35:24 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:52.022 12:35:24 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:07:52.022 12:35:24 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:07:52.022 12:35:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:07:52.022 12:35:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:52.022 12:35:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:52.022 12:35:24 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:07:52.022 12:35:24 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:07:52.022 12:35:24 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:07:52.022 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:52.022 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:07:52.022 altname enp152s0f1np1 00:07:52.022 altname ens817f1np1 00:07:52.022 inet 192.168.100.9/24 scope global mlx_0_1 00:07:52.022 valid_lft forever preferred_lft forever 00:07:52.022 12:35:24 -- nvmf/common.sh@410 -- # return 0 00:07:52.022 12:35:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:52.022 12:35:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:52.022 12:35:24 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:07:52.022 12:35:24 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:07:52.023 12:35:24 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:07:52.023 12:35:24 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:52.023 12:35:24 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:52.023 12:35:24 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:52.023 12:35:24 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:52.023 12:35:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:52.023 12:35:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:52.023 12:35:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.023 12:35:25 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:52.023 12:35:25 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:07:52.023 12:35:25 -- nvmf/common.sh@104 -- # continue 2 00:07:52.023 12:35:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:52.023 12:35:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.023 12:35:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:52.023 12:35:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.023 12:35:25 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:52.023 12:35:25 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:07:52.023 12:35:25 -- nvmf/common.sh@104 -- # continue 2 00:07:52.023 12:35:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:52.023 12:35:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:07:52.023 12:35:25 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:07:52.023 12:35:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:52.023 12:35:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:07:52.023 12:35:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:52.023 12:35:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:52.023 12:35:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:07:52.023 12:35:25 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:07:52.023 12:35:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:07:52.023 12:35:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:52.023 12:35:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:52.023 12:35:25 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:07:52.023 192.168.100.9' 00:07:52.023 12:35:25 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:07:52.023 192.168.100.9' 00:07:52.023 12:35:25 -- nvmf/common.sh@445 -- # head -n 1 00:07:52.023 12:35:25 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:52.023 12:35:25 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:52.023 192.168.100.9' 00:07:52.023 12:35:25 -- nvmf/common.sh@446 -- # tail -n +2 00:07:52.023 12:35:25 -- nvmf/common.sh@446 -- # head -n 1 00:07:52.023 12:35:25 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:52.023 12:35:25 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:07:52.023 12:35:25 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:52.023 12:35:25 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:07:52.023 12:35:25 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:07:52.023 12:35:25 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:07:52.023 12:35:25 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:52.023 12:35:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:52.023 12:35:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:52.023 12:35:25 -- common/autotest_common.sh@10 -- # set +x 00:07:52.023 ************************************ 00:07:52.023 START TEST nvmf_filesystem_no_in_capsule 00:07:52.023 ************************************ 00:07:52.023 12:35:25 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:07:52.023 12:35:25 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:52.023 12:35:25 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:52.023 12:35:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:52.023 12:35:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:52.023 12:35:25 -- common/autotest_common.sh@10 -- # set +x 00:07:52.023 12:35:25 -- nvmf/common.sh@469 -- # nvmfpid=349390 00:07:52.023 12:35:25 -- nvmf/common.sh@470 -- # waitforlisten 349390 00:07:52.023 12:35:25 -- common/autotest_common.sh@829 -- # '[' -z 349390 ']' 00:07:52.023 12:35:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.023 12:35:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.023 12:35:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
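Just above, allocate_nic_ips and get_available_rdma_ips read the IPv4 address of each RDMA interface (field 4 of `ip -o -4 addr show`, stripped of its /24 suffix), and the first two results become the test target addresses, 192.168.100.8 and 192.168.100.9. A condensed sketch of that parsing, assuming the interface names seen in this run:

    #!/usr/bin/env bash
    # Sketch of get_ip_address plus first/second target IP selection, as traced above.
    get_ip_address() {
        local interface=$1
        # one output line per address; field 4 is the CIDR, e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    rdma_ip_list=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$rdma_ip_list" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$rdma_ip_list" | tail -n +2 | head -n 1)
    echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"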
00:07:52.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.023 12:35:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.023 12:35:25 -- common/autotest_common.sh@10 -- # set +x 00:07:52.023 12:35:25 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:52.023 [2024-11-20 12:35:25.124636] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:52.023 [2024-11-20 12:35:25.124691] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.283 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.283 [2024-11-20 12:35:25.186015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.283 [2024-11-20 12:35:25.253492] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:52.283 [2024-11-20 12:35:25.253608] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.283 [2024-11-20 12:35:25.253616] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.284 [2024-11-20 12:35:25.253623] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.284 [2024-11-20 12:35:25.253759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.284 [2024-11-20 12:35:25.253870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.284 [2024-11-20 12:35:25.254027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.284 [2024-11-20 12:35:25.254028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.854 12:35:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.854 12:35:25 -- common/autotest_common.sh@862 -- # return 0 00:07:52.854 12:35:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:52.854 12:35:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:52.854 12:35:25 -- common/autotest_common.sh@10 -- # set +x 00:07:52.854 12:35:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.854 12:35:25 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:52.854 12:35:25 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:52.854 12:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.854 12:35:25 -- common/autotest_common.sh@10 -- # set +x 00:07:52.854 [2024-11-20 12:35:25.957226] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:53.115 [2024-11-20 12:35:25.988536] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6ce7f0/0x6d2ce0) succeed. 00:07:53.115 [2024-11-20 12:35:26.003312] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6cfde0/0x714380) succeed. 
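[editor's note] For reference, the target-side bring-up that this trace records reduces to a short SPDK RPC sequence. The sketch below replays it as direct scripts/rpc.py invocations; this is an illustrative reconstruction, not part of the captured log — the test issues these through its rpc_cmd helper, and the rpc.py path and socket are assumptions.
# Hedged sketch: RPC sequence behind nvmf_filesystem_no_in_capsule (in_capsule=0),
# assuming rpc_cmd forwards to scripts/rpc.py against /var/tmp/spdk.sock.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0   # -c 0 requests no in-capsule data; the target warns it raises this to the 256-byte minimum
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1                                  # 512 MiB malloc bdev with 512-byte blocks (1048576 blocks, matching the bdev JSON below)
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1               # attach Malloc1 as a namespace of cnode1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
The initiator side then attaches with nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420, which produces the nvme0n1 block device that the filesystem_ext4/btrfs/xfs subtests partition, format, mount and exercise below.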
00:07:53.115 12:35:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.115 12:35:26 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:53.115 12:35:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.115 12:35:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.115 Malloc1 00:07:53.115 12:35:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.115 12:35:26 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:53.115 12:35:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.115 12:35:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.115 12:35:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.115 12:35:26 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:53.115 12:35:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.115 12:35:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.376 12:35:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.376 12:35:26 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:53.376 12:35:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.376 12:35:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.376 [2024-11-20 12:35:26.244556] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:53.376 12:35:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.376 12:35:26 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:53.376 12:35:26 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:53.376 12:35:26 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:53.376 12:35:26 -- common/autotest_common.sh@1369 -- # local bs 00:07:53.376 12:35:26 -- common/autotest_common.sh@1370 -- # local nb 00:07:53.376 12:35:26 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:53.376 12:35:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.376 12:35:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.376 12:35:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.376 12:35:26 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:53.376 { 00:07:53.376 "name": "Malloc1", 00:07:53.376 "aliases": [ 00:07:53.376 "151b9d33-c78e-424b-a720-eb3ce44054d2" 00:07:53.376 ], 00:07:53.376 "product_name": "Malloc disk", 00:07:53.376 "block_size": 512, 00:07:53.376 "num_blocks": 1048576, 00:07:53.376 "uuid": "151b9d33-c78e-424b-a720-eb3ce44054d2", 00:07:53.376 "assigned_rate_limits": { 00:07:53.376 "rw_ios_per_sec": 0, 00:07:53.376 "rw_mbytes_per_sec": 0, 00:07:53.376 "r_mbytes_per_sec": 0, 00:07:53.376 "w_mbytes_per_sec": 0 00:07:53.376 }, 00:07:53.376 "claimed": true, 00:07:53.376 "claim_type": "exclusive_write", 00:07:53.376 "zoned": false, 00:07:53.376 "supported_io_types": { 00:07:53.376 "read": true, 00:07:53.376 "write": true, 00:07:53.376 "unmap": true, 00:07:53.376 "write_zeroes": true, 00:07:53.376 "flush": true, 00:07:53.376 "reset": true, 00:07:53.376 "compare": false, 00:07:53.376 "compare_and_write": false, 00:07:53.376 "abort": true, 00:07:53.376 "nvme_admin": false, 00:07:53.376 "nvme_io": false 00:07:53.376 }, 00:07:53.376 "memory_domains": [ 00:07:53.376 { 00:07:53.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.376 "dma_device_type": 2 00:07:53.376 } 00:07:53.376 ], 00:07:53.376 
"driver_specific": {} 00:07:53.376 } 00:07:53.376 ]' 00:07:53.376 12:35:26 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:53.376 12:35:26 -- common/autotest_common.sh@1372 -- # bs=512 00:07:53.376 12:35:26 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:53.376 12:35:26 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:53.376 12:35:26 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:53.376 12:35:26 -- common/autotest_common.sh@1377 -- # echo 512 00:07:53.376 12:35:26 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:53.376 12:35:26 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:54.763 12:35:27 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:54.763 12:35:27 -- common/autotest_common.sh@1187 -- # local i=0 00:07:54.763 12:35:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:54.763 12:35:27 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:54.763 12:35:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:57.310 12:35:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:57.310 12:35:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:57.310 12:35:29 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.310 12:35:29 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:57.310 12:35:29 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.310 12:35:29 -- common/autotest_common.sh@1197 -- # return 0 00:07:57.310 12:35:29 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:57.310 12:35:29 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:57.310 12:35:29 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:57.310 12:35:29 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:57.310 12:35:29 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:57.310 12:35:29 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:57.310 12:35:29 -- setup/common.sh@80 -- # echo 536870912 00:07:57.310 12:35:29 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:57.310 12:35:29 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:57.310 12:35:29 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:57.310 12:35:29 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:57.310 12:35:29 -- target/filesystem.sh@69 -- # partprobe 00:07:57.310 12:35:30 -- target/filesystem.sh@70 -- # sleep 1 00:07:58.254 12:35:31 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:58.254 12:35:31 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:58.254 12:35:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:58.254 12:35:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.254 12:35:31 -- common/autotest_common.sh@10 -- # set +x 00:07:58.254 ************************************ 00:07:58.254 START TEST filesystem_ext4 00:07:58.254 ************************************ 00:07:58.254 12:35:31 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:58.254 12:35:31 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:58.254 12:35:31 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:58.254 
12:35:31 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:58.254 12:35:31 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:58.254 12:35:31 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:58.254 12:35:31 -- common/autotest_common.sh@914 -- # local i=0 00:07:58.254 12:35:31 -- common/autotest_common.sh@915 -- # local force 00:07:58.254 12:35:31 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:58.254 12:35:31 -- common/autotest_common.sh@918 -- # force=-F 00:07:58.254 12:35:31 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:58.254 mke2fs 1.47.0 (5-Feb-2023) 00:07:58.254 Discarding device blocks: 0/522240 done 00:07:58.254 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:58.254 Filesystem UUID: 6c5a9796-7303-46aa-b340-4d570fffe83b 00:07:58.254 Superblock backups stored on blocks: 00:07:58.254 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:58.254 00:07:58.254 Allocating group tables: 0/64 done 00:07:58.254 Writing inode tables: 0/64 done 00:07:58.254 Creating journal (8192 blocks): done 00:07:58.254 Writing superblocks and filesystem accounting information: 0/64 done 00:07:58.254 00:07:58.254 12:35:31 -- common/autotest_common.sh@931 -- # return 0 00:07:58.254 12:35:31 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:58.254 12:35:31 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:58.254 12:35:31 -- target/filesystem.sh@25 -- # sync 00:07:58.254 12:35:31 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:58.254 12:35:31 -- target/filesystem.sh@27 -- # sync 00:07:58.254 12:35:31 -- target/filesystem.sh@29 -- # i=0 00:07:58.254 12:35:31 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:58.254 12:35:31 -- target/filesystem.sh@37 -- # kill -0 349390 00:07:58.254 12:35:31 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:58.254 12:35:31 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:58.254 12:35:31 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:58.254 12:35:31 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:58.254 00:07:58.254 real 0m0.194s 00:07:58.254 user 0m0.027s 00:07:58.254 sys 0m0.069s 00:07:58.254 12:35:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.254 12:35:31 -- common/autotest_common.sh@10 -- # set +x 00:07:58.254 ************************************ 00:07:58.254 END TEST filesystem_ext4 00:07:58.254 ************************************ 00:07:58.254 12:35:31 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:58.254 12:35:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:58.254 12:35:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.254 12:35:31 -- common/autotest_common.sh@10 -- # set +x 00:07:58.254 ************************************ 00:07:58.254 START TEST filesystem_btrfs 00:07:58.254 ************************************ 00:07:58.254 12:35:31 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:58.254 12:35:31 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:58.254 12:35:31 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:58.254 12:35:31 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:58.254 12:35:31 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:58.254 12:35:31 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:58.254 12:35:31 -- common/autotest_common.sh@914 -- # local 
i=0 00:07:58.254 12:35:31 -- common/autotest_common.sh@915 -- # local force 00:07:58.254 12:35:31 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:58.254 12:35:31 -- common/autotest_common.sh@920 -- # force=-f 00:07:58.254 12:35:31 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:58.516 btrfs-progs v6.8.1 00:07:58.516 See https://btrfs.readthedocs.io for more information. 00:07:58.516 00:07:58.516 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:58.516 NOTE: several default settings have changed in version 5.15, please make sure 00:07:58.516 this does not affect your deployments: 00:07:58.516 - DUP for metadata (-m dup) 00:07:58.516 - enabled no-holes (-O no-holes) 00:07:58.516 - enabled free-space-tree (-R free-space-tree) 00:07:58.516 00:07:58.516 Label: (null) 00:07:58.516 UUID: 9acd4ee8-7a98-4822-b651-aeca7e3196d3 00:07:58.516 Node size: 16384 00:07:58.516 Sector size: 4096 (CPU page size: 4096) 00:07:58.516 Filesystem size: 510.00MiB 00:07:58.516 Block group profiles: 00:07:58.516 Data: single 8.00MiB 00:07:58.516 Metadata: DUP 32.00MiB 00:07:58.516 System: DUP 8.00MiB 00:07:58.516 SSD detected: yes 00:07:58.516 Zoned device: no 00:07:58.516 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:58.516 Checksum: crc32c 00:07:58.516 Number of devices: 1 00:07:58.516 Devices: 00:07:58.516 ID SIZE PATH 00:07:58.516 1 510.00MiB /dev/nvme0n1p1 00:07:58.516 00:07:58.516 12:35:31 -- common/autotest_common.sh@931 -- # return 0 00:07:58.516 12:35:31 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:58.516 12:35:31 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:58.516 12:35:31 -- target/filesystem.sh@25 -- # sync 00:07:58.516 12:35:31 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:58.516 12:35:31 -- target/filesystem.sh@27 -- # sync 00:07:58.516 12:35:31 -- target/filesystem.sh@29 -- # i=0 00:07:58.516 12:35:31 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:58.516 12:35:31 -- target/filesystem.sh@37 -- # kill -0 349390 00:07:58.516 12:35:31 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:58.516 12:35:31 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:58.516 12:35:31 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:58.516 12:35:31 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:58.516 00:07:58.516 real 0m0.270s 00:07:58.516 user 0m0.020s 00:07:58.516 sys 0m0.178s 00:07:58.516 12:35:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.516 12:35:31 -- common/autotest_common.sh@10 -- # set +x 00:07:58.516 ************************************ 00:07:58.516 END TEST filesystem_btrfs 00:07:58.516 ************************************ 00:07:58.516 12:35:31 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:58.516 12:35:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:58.516 12:35:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.517 12:35:31 -- common/autotest_common.sh@10 -- # set +x 00:07:58.517 ************************************ 00:07:58.517 START TEST filesystem_xfs 00:07:58.517 ************************************ 00:07:58.517 12:35:31 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:58.517 12:35:31 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:58.517 12:35:31 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:58.517 12:35:31 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:58.517 12:35:31 -- 
common/autotest_common.sh@912 -- # local fstype=xfs 00:07:58.517 12:35:31 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:58.517 12:35:31 -- common/autotest_common.sh@914 -- # local i=0 00:07:58.517 12:35:31 -- common/autotest_common.sh@915 -- # local force 00:07:58.517 12:35:31 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:58.517 12:35:31 -- common/autotest_common.sh@920 -- # force=-f 00:07:58.517 12:35:31 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:58.778 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:58.778 = sectsz=512 attr=2, projid32bit=1 00:07:58.778 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:58.778 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:58.778 data = bsize=4096 blocks=130560, imaxpct=25 00:07:58.778 = sunit=0 swidth=0 blks 00:07:58.778 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:58.778 log =internal log bsize=4096 blocks=16384, version=2 00:07:58.778 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:58.778 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:58.778 Discarding blocks...Done. 00:07:58.778 12:35:31 -- common/autotest_common.sh@931 -- # return 0 00:07:58.778 12:35:31 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.722 12:35:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.722 12:35:32 -- target/filesystem.sh@25 -- # sync 00:07:59.722 12:35:32 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.722 12:35:32 -- target/filesystem.sh@27 -- # sync 00:07:59.722 12:35:32 -- target/filesystem.sh@29 -- # i=0 00:07:59.722 12:35:32 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.983 12:35:32 -- target/filesystem.sh@37 -- # kill -0 349390 00:07:59.983 12:35:32 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.983 12:35:32 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.983 12:35:32 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.983 12:35:32 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.983 00:07:59.983 real 0m1.281s 00:07:59.983 user 0m0.026s 00:07:59.983 sys 0m0.099s 00:07:59.983 12:35:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:59.983 12:35:32 -- common/autotest_common.sh@10 -- # set +x 00:07:59.983 ************************************ 00:07:59.983 END TEST filesystem_xfs 00:07:59.983 ************************************ 00:07:59.983 12:35:32 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:59.983 12:35:32 -- target/filesystem.sh@93 -- # sync 00:07:59.983 12:35:32 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:01.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.370 12:35:34 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:01.370 12:35:34 -- common/autotest_common.sh@1208 -- # local i=0 00:08:01.370 12:35:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:01.370 12:35:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:01.370 12:35:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:01.370 12:35:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:01.370 12:35:34 -- common/autotest_common.sh@1220 -- # return 0 00:08:01.370 12:35:34 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:01.370 12:35:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.370 12:35:34 -- 
common/autotest_common.sh@10 -- # set +x 00:08:01.370 12:35:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.370 12:35:34 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:01.370 12:35:34 -- target/filesystem.sh@101 -- # killprocess 349390 00:08:01.370 12:35:34 -- common/autotest_common.sh@936 -- # '[' -z 349390 ']' 00:08:01.370 12:35:34 -- common/autotest_common.sh@940 -- # kill -0 349390 00:08:01.370 12:35:34 -- common/autotest_common.sh@941 -- # uname 00:08:01.370 12:35:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:01.370 12:35:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 349390 00:08:01.370 12:35:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:01.370 12:35:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:01.370 12:35:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 349390' 00:08:01.370 killing process with pid 349390 00:08:01.370 12:35:34 -- common/autotest_common.sh@955 -- # kill 349390 00:08:01.370 12:35:34 -- common/autotest_common.sh@960 -- # wait 349390 00:08:01.632 12:35:34 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:01.632 00:08:01.632 real 0m9.563s 00:08:01.632 user 0m37.522s 00:08:01.632 sys 0m1.147s 00:08:01.632 12:35:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.632 12:35:34 -- common/autotest_common.sh@10 -- # set +x 00:08:01.632 ************************************ 00:08:01.632 END TEST nvmf_filesystem_no_in_capsule 00:08:01.632 ************************************ 00:08:01.632 12:35:34 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:01.632 12:35:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:01.632 12:35:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.632 12:35:34 -- common/autotest_common.sh@10 -- # set +x 00:08:01.632 ************************************ 00:08:01.632 START TEST nvmf_filesystem_in_capsule 00:08:01.632 ************************************ 00:08:01.632 12:35:34 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:01.632 12:35:34 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:01.632 12:35:34 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:01.632 12:35:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:01.632 12:35:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:01.632 12:35:34 -- common/autotest_common.sh@10 -- # set +x 00:08:01.632 12:35:34 -- nvmf/common.sh@469 -- # nvmfpid=351445 00:08:01.632 12:35:34 -- nvmf/common.sh@470 -- # waitforlisten 351445 00:08:01.632 12:35:34 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:01.632 12:35:34 -- common/autotest_common.sh@829 -- # '[' -z 351445 ']' 00:08:01.632 12:35:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.632 12:35:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:01.632 12:35:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.632 12:35:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:01.632 12:35:34 -- common/autotest_common.sh@10 -- # set +x 00:08:01.893 [2024-11-20 12:35:34.741113] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:01.893 [2024-11-20 12:35:34.741171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.893 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.893 [2024-11-20 12:35:34.804339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.893 [2024-11-20 12:35:34.871414] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:01.893 [2024-11-20 12:35:34.871543] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.893 [2024-11-20 12:35:34.871554] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.893 [2024-11-20 12:35:34.871562] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:01.893 [2024-11-20 12:35:34.871699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.893 [2024-11-20 12:35:34.871801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.893 [2024-11-20 12:35:34.871955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.894 [2024-11-20 12:35:34.871957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.465 12:35:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.465 12:35:35 -- common/autotest_common.sh@862 -- # return 0 00:08:02.465 12:35:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:02.465 12:35:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:02.465 12:35:35 -- common/autotest_common.sh@10 -- # set +x 00:08:02.465 12:35:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.465 12:35:35 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:02.465 12:35:35 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:02.727 12:35:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.727 12:35:35 -- common/autotest_common.sh@10 -- # set +x 00:08:02.727 [2024-11-20 12:35:35.609504] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdc47f0/0xdc8ce0) succeed. 00:08:02.727 [2024-11-20 12:35:35.624170] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdc5de0/0xe0a380) succeed. 
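[editor's note] The in-capsule variant that starts here repeats the same bring-up; the only functional difference visible in the trace is the transport's in-capsule data size. A minimal sketch of the changed call, under the same rpc.py assumption as the earlier note:
# Hedged sketch: identical target bring-up, but allowing 4096 bytes of in-capsule data,
# so small write payloads can ride inside the RDMA command capsule instead of requiring
# a separate RDMA READ by the target.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
# Subsystem, namespace and listener creation, the initiator's nvme connect, and the
# ext4/btrfs/xfs format-mount-write-unmount cycle then proceed as in the 0-byte case above.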
00:08:02.727 12:35:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.727 12:35:35 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:02.727 12:35:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.727 12:35:35 -- common/autotest_common.sh@10 -- # set +x 00:08:02.727 Malloc1 00:08:02.727 12:35:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.727 12:35:35 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:02.727 12:35:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.727 12:35:35 -- common/autotest_common.sh@10 -- # set +x 00:08:02.988 12:35:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.988 12:35:35 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:02.988 12:35:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.988 12:35:35 -- common/autotest_common.sh@10 -- # set +x 00:08:02.988 12:35:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.988 12:35:35 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:02.988 12:35:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.988 12:35:35 -- common/autotest_common.sh@10 -- # set +x 00:08:02.988 [2024-11-20 12:35:35.858305] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:02.988 12:35:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.988 12:35:35 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:02.988 12:35:35 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:02.988 12:35:35 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:02.988 12:35:35 -- common/autotest_common.sh@1369 -- # local bs 00:08:02.988 12:35:35 -- common/autotest_common.sh@1370 -- # local nb 00:08:02.988 12:35:35 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:02.988 12:35:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.988 12:35:35 -- common/autotest_common.sh@10 -- # set +x 00:08:02.988 12:35:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.988 12:35:35 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:02.988 { 00:08:02.988 "name": "Malloc1", 00:08:02.988 "aliases": [ 00:08:02.988 "51d41467-08bd-424d-981e-a742421fad80" 00:08:02.988 ], 00:08:02.988 "product_name": "Malloc disk", 00:08:02.988 "block_size": 512, 00:08:02.988 "num_blocks": 1048576, 00:08:02.988 "uuid": "51d41467-08bd-424d-981e-a742421fad80", 00:08:02.988 "assigned_rate_limits": { 00:08:02.988 "rw_ios_per_sec": 0, 00:08:02.988 "rw_mbytes_per_sec": 0, 00:08:02.988 "r_mbytes_per_sec": 0, 00:08:02.988 "w_mbytes_per_sec": 0 00:08:02.988 }, 00:08:02.988 "claimed": true, 00:08:02.988 "claim_type": "exclusive_write", 00:08:02.988 "zoned": false, 00:08:02.988 "supported_io_types": { 00:08:02.988 "read": true, 00:08:02.988 "write": true, 00:08:02.988 "unmap": true, 00:08:02.988 "write_zeroes": true, 00:08:02.988 "flush": true, 00:08:02.988 "reset": true, 00:08:02.988 "compare": false, 00:08:02.988 "compare_and_write": false, 00:08:02.988 "abort": true, 00:08:02.988 "nvme_admin": false, 00:08:02.988 "nvme_io": false 00:08:02.988 }, 00:08:02.988 "memory_domains": [ 00:08:02.988 { 00:08:02.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.988 "dma_device_type": 2 00:08:02.988 } 00:08:02.988 ], 00:08:02.988 
"driver_specific": {} 00:08:02.988 } 00:08:02.988 ]' 00:08:02.988 12:35:35 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:02.988 12:35:35 -- common/autotest_common.sh@1372 -- # bs=512 00:08:02.988 12:35:35 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:02.988 12:35:35 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:02.988 12:35:35 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:02.988 12:35:35 -- common/autotest_common.sh@1377 -- # echo 512 00:08:02.988 12:35:35 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:02.988 12:35:35 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:04.376 12:35:37 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:04.376 12:35:37 -- common/autotest_common.sh@1187 -- # local i=0 00:08:04.376 12:35:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:04.376 12:35:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:04.376 12:35:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:06.290 12:35:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:06.290 12:35:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:06.290 12:35:39 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:06.290 12:35:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:06.290 12:35:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:06.290 12:35:39 -- common/autotest_common.sh@1197 -- # return 0 00:08:06.290 12:35:39 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:06.290 12:35:39 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:06.550 12:35:39 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:06.550 12:35:39 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:06.550 12:35:39 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:06.550 12:35:39 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:06.550 12:35:39 -- setup/common.sh@80 -- # echo 536870912 00:08:06.550 12:35:39 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:06.550 12:35:39 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:06.550 12:35:39 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:06.550 12:35:39 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:06.550 12:35:39 -- target/filesystem.sh@69 -- # partprobe 00:08:06.550 12:35:39 -- target/filesystem.sh@70 -- # sleep 1 00:08:07.492 12:35:40 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:07.492 12:35:40 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:07.492 12:35:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:07.492 12:35:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.492 12:35:40 -- common/autotest_common.sh@10 -- # set +x 00:08:07.492 ************************************ 00:08:07.492 START TEST filesystem_in_capsule_ext4 00:08:07.492 ************************************ 00:08:07.492 12:35:40 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:07.492 12:35:40 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:07.492 12:35:40 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:08:07.492 12:35:40 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:07.492 12:35:40 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:07.492 12:35:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:07.492 12:35:40 -- common/autotest_common.sh@914 -- # local i=0 00:08:07.492 12:35:40 -- common/autotest_common.sh@915 -- # local force 00:08:07.492 12:35:40 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:07.492 12:35:40 -- common/autotest_common.sh@918 -- # force=-F 00:08:07.492 12:35:40 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:07.492 mke2fs 1.47.0 (5-Feb-2023) 00:08:07.492 Discarding device blocks: 0/522240 done 00:08:07.492 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:07.492 Filesystem UUID: 7003e5c5-9176-4136-ba4c-727229dfd718 00:08:07.492 Superblock backups stored on blocks: 00:08:07.492 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:07.492 00:08:07.492 Allocating group tables: 0/64 done 00:08:07.492 Writing inode tables: 0/64 done 00:08:07.753 Creating journal (8192 blocks): done 00:08:07.753 Writing superblocks and filesystem accounting information: 0/64 done 00:08:07.753 00:08:07.753 12:35:40 -- common/autotest_common.sh@931 -- # return 0 00:08:07.753 12:35:40 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:07.753 12:35:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:07.753 12:35:40 -- target/filesystem.sh@25 -- # sync 00:08:07.753 12:35:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:07.753 12:35:40 -- target/filesystem.sh@27 -- # sync 00:08:07.753 12:35:40 -- target/filesystem.sh@29 -- # i=0 00:08:07.753 12:35:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:07.753 12:35:40 -- target/filesystem.sh@37 -- # kill -0 351445 00:08:07.753 12:35:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:07.753 12:35:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:07.753 12:35:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:07.753 12:35:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:07.753 00:08:07.753 real 0m0.185s 00:08:07.753 user 0m0.029s 00:08:07.753 sys 0m0.068s 00:08:07.753 12:35:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:07.753 12:35:40 -- common/autotest_common.sh@10 -- # set +x 00:08:07.753 ************************************ 00:08:07.753 END TEST filesystem_in_capsule_ext4 00:08:07.753 ************************************ 00:08:07.753 12:35:40 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:07.753 12:35:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:07.753 12:35:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.753 12:35:40 -- common/autotest_common.sh@10 -- # set +x 00:08:07.753 ************************************ 00:08:07.753 START TEST filesystem_in_capsule_btrfs 00:08:07.753 ************************************ 00:08:07.753 12:35:40 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:07.753 12:35:40 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:07.753 12:35:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:07.753 12:35:40 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:07.753 12:35:40 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:07.753 12:35:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 
00:08:07.753 12:35:40 -- common/autotest_common.sh@914 -- # local i=0 00:08:07.753 12:35:40 -- common/autotest_common.sh@915 -- # local force 00:08:07.753 12:35:40 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:07.753 12:35:40 -- common/autotest_common.sh@920 -- # force=-f 00:08:07.753 12:35:40 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:07.753 btrfs-progs v6.8.1 00:08:07.753 See https://btrfs.readthedocs.io for more information. 00:08:07.753 00:08:07.753 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:07.753 NOTE: several default settings have changed in version 5.15, please make sure 00:08:07.753 this does not affect your deployments: 00:08:07.753 - DUP for metadata (-m dup) 00:08:07.753 - enabled no-holes (-O no-holes) 00:08:07.753 - enabled free-space-tree (-R free-space-tree) 00:08:07.753 00:08:07.753 Label: (null) 00:08:07.753 UUID: a064c716-e8cd-4efd-9d4f-bd4c6566d04c 00:08:07.753 Node size: 16384 00:08:07.753 Sector size: 4096 (CPU page size: 4096) 00:08:07.753 Filesystem size: 510.00MiB 00:08:07.753 Block group profiles: 00:08:07.753 Data: single 8.00MiB 00:08:07.753 Metadata: DUP 32.00MiB 00:08:07.753 System: DUP 8.00MiB 00:08:07.753 SSD detected: yes 00:08:07.753 Zoned device: no 00:08:07.753 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:07.753 Checksum: crc32c 00:08:07.753 Number of devices: 1 00:08:07.753 Devices: 00:08:07.753 ID SIZE PATH 00:08:07.753 1 510.00MiB /dev/nvme0n1p1 00:08:07.753 00:08:07.753 12:35:40 -- common/autotest_common.sh@931 -- # return 0 00:08:07.753 12:35:40 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.014 12:35:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.014 12:35:40 -- target/filesystem.sh@25 -- # sync 00:08:08.014 12:35:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.014 12:35:40 -- target/filesystem.sh@27 -- # sync 00:08:08.014 12:35:40 -- target/filesystem.sh@29 -- # i=0 00:08:08.014 12:35:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.014 12:35:40 -- target/filesystem.sh@37 -- # kill -0 351445 00:08:08.014 12:35:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.014 12:35:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.014 12:35:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.014 12:35:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.014 00:08:08.014 real 0m0.208s 00:08:08.014 user 0m0.024s 00:08:08.014 sys 0m0.118s 00:08:08.014 12:35:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.014 12:35:40 -- common/autotest_common.sh@10 -- # set +x 00:08:08.014 ************************************ 00:08:08.014 END TEST filesystem_in_capsule_btrfs 00:08:08.014 ************************************ 00:08:08.014 12:35:41 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:08.014 12:35:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:08.014 12:35:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.014 12:35:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.014 ************************************ 00:08:08.014 START TEST filesystem_in_capsule_xfs 00:08:08.014 ************************************ 00:08:08.014 12:35:41 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:08.014 12:35:41 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:08.014 12:35:41 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.014 
12:35:41 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:08.014 12:35:41 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:08.014 12:35:41 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:08.014 12:35:41 -- common/autotest_common.sh@914 -- # local i=0 00:08:08.014 12:35:41 -- common/autotest_common.sh@915 -- # local force 00:08:08.014 12:35:41 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:08.014 12:35:41 -- common/autotest_common.sh@920 -- # force=-f 00:08:08.014 12:35:41 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:08.014 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:08.014 = sectsz=512 attr=2, projid32bit=1 00:08:08.014 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:08.014 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:08.014 data = bsize=4096 blocks=130560, imaxpct=25 00:08:08.014 = sunit=0 swidth=0 blks 00:08:08.014 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:08.014 log =internal log bsize=4096 blocks=16384, version=2 00:08:08.014 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:08.014 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:08.014 Discarding blocks...Done. 00:08:08.014 12:35:41 -- common/autotest_common.sh@931 -- # return 0 00:08:08.014 12:35:41 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.274 12:35:41 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.274 12:35:41 -- target/filesystem.sh@25 -- # sync 00:08:08.274 12:35:41 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.274 12:35:41 -- target/filesystem.sh@27 -- # sync 00:08:08.274 12:35:41 -- target/filesystem.sh@29 -- # i=0 00:08:08.274 12:35:41 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.274 12:35:41 -- target/filesystem.sh@37 -- # kill -0 351445 00:08:08.274 12:35:41 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.274 12:35:41 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.274 12:35:41 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.274 12:35:41 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.274 00:08:08.274 real 0m0.182s 00:08:08.274 user 0m0.021s 00:08:08.274 sys 0m0.077s 00:08:08.274 12:35:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.274 12:35:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.274 ************************************ 00:08:08.274 END TEST filesystem_in_capsule_xfs 00:08:08.274 ************************************ 00:08:08.274 12:35:41 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:08.274 12:35:41 -- target/filesystem.sh@93 -- # sync 00:08:08.274 12:35:41 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:09.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.662 12:35:42 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:09.662 12:35:42 -- common/autotest_common.sh@1208 -- # local i=0 00:08:09.662 12:35:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:09.662 12:35:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.662 12:35:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:09.662 12:35:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.662 12:35:42 -- common/autotest_common.sh@1220 -- # return 0 00:08:09.662 12:35:42 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:08:09.662 12:35:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.662 12:35:42 -- common/autotest_common.sh@10 -- # set +x 00:08:09.662 12:35:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.662 12:35:42 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:09.662 12:35:42 -- target/filesystem.sh@101 -- # killprocess 351445 00:08:09.662 12:35:42 -- common/autotest_common.sh@936 -- # '[' -z 351445 ']' 00:08:09.662 12:35:42 -- common/autotest_common.sh@940 -- # kill -0 351445 00:08:09.662 12:35:42 -- common/autotest_common.sh@941 -- # uname 00:08:09.662 12:35:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:09.662 12:35:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 351445 00:08:09.662 12:35:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:09.662 12:35:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:09.662 12:35:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 351445' 00:08:09.662 killing process with pid 351445 00:08:09.662 12:35:42 -- common/autotest_common.sh@955 -- # kill 351445 00:08:09.662 12:35:42 -- common/autotest_common.sh@960 -- # wait 351445 00:08:09.923 12:35:43 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:09.923 00:08:09.923 real 0m8.323s 00:08:09.923 user 0m32.534s 00:08:09.923 sys 0m1.065s 00:08:09.923 12:35:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:09.923 12:35:43 -- common/autotest_common.sh@10 -- # set +x 00:08:09.923 ************************************ 00:08:09.923 END TEST nvmf_filesystem_in_capsule 00:08:09.923 ************************************ 00:08:10.184 12:35:43 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:10.184 12:35:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:10.184 12:35:43 -- nvmf/common.sh@116 -- # sync 00:08:10.184 12:35:43 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:10.184 12:35:43 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:10.184 12:35:43 -- nvmf/common.sh@119 -- # set +e 00:08:10.184 12:35:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:10.184 12:35:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:10.184 rmmod nvme_rdma 00:08:10.184 rmmod nvme_fabrics 00:08:10.184 12:35:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:10.184 12:35:43 -- nvmf/common.sh@123 -- # set -e 00:08:10.184 12:35:43 -- nvmf/common.sh@124 -- # return 0 00:08:10.184 12:35:43 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:10.184 12:35:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:10.184 12:35:43 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:10.184 00:08:10.184 real 0m25.700s 00:08:10.184 user 1m12.403s 00:08:10.184 sys 0m7.794s 00:08:10.184 12:35:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.184 12:35:43 -- common/autotest_common.sh@10 -- # set +x 00:08:10.184 ************************************ 00:08:10.184 END TEST nvmf_filesystem 00:08:10.184 ************************************ 00:08:10.184 12:35:43 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:10.184 12:35:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:10.184 12:35:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.184 12:35:43 -- common/autotest_common.sh@10 -- # set +x 00:08:10.184 ************************************ 00:08:10.184 START TEST nvmf_discovery 00:08:10.184 
************************************ 00:08:10.184 12:35:43 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:10.184 * Looking for test storage... 00:08:10.184 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:10.184 12:35:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:10.184 12:35:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:10.184 12:35:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:10.447 12:35:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:10.447 12:35:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:10.447 12:35:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:10.447 12:35:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:10.447 12:35:43 -- scripts/common.sh@335 -- # IFS=.-: 00:08:10.447 12:35:43 -- scripts/common.sh@335 -- # read -ra ver1 00:08:10.447 12:35:43 -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.447 12:35:43 -- scripts/common.sh@336 -- # read -ra ver2 00:08:10.447 12:35:43 -- scripts/common.sh@337 -- # local 'op=<' 00:08:10.447 12:35:43 -- scripts/common.sh@339 -- # ver1_l=2 00:08:10.447 12:35:43 -- scripts/common.sh@340 -- # ver2_l=1 00:08:10.447 12:35:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:10.447 12:35:43 -- scripts/common.sh@343 -- # case "$op" in 00:08:10.447 12:35:43 -- scripts/common.sh@344 -- # : 1 00:08:10.447 12:35:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:10.447 12:35:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:10.447 12:35:43 -- scripts/common.sh@364 -- # decimal 1 00:08:10.447 12:35:43 -- scripts/common.sh@352 -- # local d=1 00:08:10.447 12:35:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.447 12:35:43 -- scripts/common.sh@354 -- # echo 1 00:08:10.447 12:35:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:10.447 12:35:43 -- scripts/common.sh@365 -- # decimal 2 00:08:10.447 12:35:43 -- scripts/common.sh@352 -- # local d=2 00:08:10.447 12:35:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.447 12:35:43 -- scripts/common.sh@354 -- # echo 2 00:08:10.447 12:35:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:10.447 12:35:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:10.447 12:35:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:10.447 12:35:43 -- scripts/common.sh@367 -- # return 0 00:08:10.447 12:35:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.447 12:35:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:10.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.447 --rc genhtml_branch_coverage=1 00:08:10.447 --rc genhtml_function_coverage=1 00:08:10.447 --rc genhtml_legend=1 00:08:10.447 --rc geninfo_all_blocks=1 00:08:10.447 --rc geninfo_unexecuted_blocks=1 00:08:10.447 00:08:10.447 ' 00:08:10.447 12:35:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:10.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.447 --rc genhtml_branch_coverage=1 00:08:10.447 --rc genhtml_function_coverage=1 00:08:10.447 --rc genhtml_legend=1 00:08:10.447 --rc geninfo_all_blocks=1 00:08:10.447 --rc geninfo_unexecuted_blocks=1 00:08:10.447 00:08:10.447 ' 00:08:10.447 12:35:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:10.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:10.447 --rc genhtml_branch_coverage=1 00:08:10.447 --rc genhtml_function_coverage=1 00:08:10.447 --rc genhtml_legend=1 00:08:10.447 --rc geninfo_all_blocks=1 00:08:10.447 --rc geninfo_unexecuted_blocks=1 00:08:10.447 00:08:10.447 ' 00:08:10.447 12:35:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:10.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.447 --rc genhtml_branch_coverage=1 00:08:10.447 --rc genhtml_function_coverage=1 00:08:10.447 --rc genhtml_legend=1 00:08:10.447 --rc geninfo_all_blocks=1 00:08:10.447 --rc geninfo_unexecuted_blocks=1 00:08:10.447 00:08:10.447 ' 00:08:10.447 12:35:43 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.447 12:35:43 -- nvmf/common.sh@7 -- # uname -s 00:08:10.447 12:35:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.447 12:35:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.447 12:35:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.447 12:35:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.447 12:35:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.447 12:35:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.447 12:35:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.447 12:35:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.447 12:35:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.447 12:35:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.448 12:35:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:10.448 12:35:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:10.448 12:35:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.448 12:35:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.448 12:35:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.448 12:35:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:10.448 12:35:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.448 12:35:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.448 12:35:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.448 12:35:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.448 12:35:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.448 12:35:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.448 12:35:43 -- paths/export.sh@5 -- # export PATH 00:08:10.448 12:35:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.448 12:35:43 -- nvmf/common.sh@46 -- # : 0 00:08:10.448 12:35:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:10.448 12:35:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:10.448 12:35:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:10.448 12:35:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.448 12:35:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.448 12:35:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:10.448 12:35:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:10.448 12:35:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:10.448 12:35:43 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:10.448 12:35:43 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:10.448 12:35:43 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:10.448 12:35:43 -- target/discovery.sh@15 -- # hash nvme 00:08:10.448 12:35:43 -- target/discovery.sh@20 -- # nvmftestinit 00:08:10.448 12:35:43 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:10.448 12:35:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.448 12:35:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:10.448 12:35:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:10.448 12:35:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:10.448 12:35:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.448 12:35:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.448 12:35:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.448 12:35:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:10.448 12:35:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:10.448 12:35:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:10.448 12:35:43 -- common/autotest_common.sh@10 -- # set +x 00:08:18.596 12:35:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:18.597 12:35:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:18.597 12:35:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:18.597 12:35:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:18.597 12:35:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:18.597 12:35:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:18.597 12:35:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:18.597 12:35:50 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:18.597 12:35:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:18.597 12:35:50 -- nvmf/common.sh@295 -- # e810=() 00:08:18.597 12:35:50 -- nvmf/common.sh@295 -- # local -ga e810 00:08:18.597 12:35:50 -- nvmf/common.sh@296 -- # x722=() 00:08:18.597 12:35:50 -- nvmf/common.sh@296 -- # local -ga x722 00:08:18.597 12:35:50 -- nvmf/common.sh@297 -- # mlx=() 00:08:18.597 12:35:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:18.597 12:35:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.597 12:35:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.597 12:35:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.597 12:35:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.597 12:35:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.597 12:35:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.597 12:35:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.597 12:35:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.597 12:35:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.597 12:35:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.597 12:35:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.597 12:35:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:18.597 12:35:50 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:18.597 12:35:50 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:18.597 12:35:50 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:18.597 12:35:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:18.597 12:35:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:18.597 12:35:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:08:18.597 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:08:18.597 12:35:50 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:18.597 12:35:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:18.597 12:35:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:08:18.597 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:08:18.597 12:35:50 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:18.597 12:35:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:18.597 12:35:50 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:18.597 
12:35:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.597 12:35:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:18.597 12:35:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.597 12:35:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:08:18.597 Found net devices under 0000:98:00.0: mlx_0_0 00:08:18.597 12:35:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.597 12:35:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:18.597 12:35:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.597 12:35:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:18.597 12:35:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.597 12:35:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:08:18.597 Found net devices under 0000:98:00.1: mlx_0_1 00:08:18.597 12:35:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.597 12:35:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:18.597 12:35:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:18.597 12:35:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:18.597 12:35:50 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:18.597 12:35:50 -- nvmf/common.sh@57 -- # uname 00:08:18.597 12:35:50 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:18.597 12:35:50 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:18.597 12:35:50 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:18.597 12:35:50 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:18.597 12:35:50 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:18.597 12:35:50 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:18.597 12:35:50 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:18.597 12:35:50 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:18.597 12:35:50 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:18.597 12:35:50 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:18.597 12:35:50 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:18.597 12:35:50 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:18.597 12:35:50 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:18.597 12:35:50 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:18.597 12:35:50 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:18.597 12:35:50 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:18.597 12:35:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:18.597 12:35:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.597 12:35:50 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:18.597 12:35:50 -- nvmf/common.sh@104 -- # continue 2 00:08:18.597 12:35:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:18.597 12:35:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.597 12:35:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.597 12:35:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:18.597 12:35:50 -- nvmf/common.sh@104 -- # continue 2 00:08:18.597 12:35:50 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:18.597 12:35:50 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:18.597 12:35:50 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:18.597 12:35:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:18.597 12:35:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:18.597 12:35:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:18.597 12:35:50 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:18.597 12:35:50 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:18.597 12:35:50 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:18.597 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:18.597 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:08:18.597 altname enp152s0f0np0 00:08:18.597 altname ens817f0np0 00:08:18.597 inet 192.168.100.8/24 scope global mlx_0_0 00:08:18.597 valid_lft forever preferred_lft forever 00:08:18.597 12:35:50 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:18.597 12:35:50 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:18.597 12:35:50 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:18.598 12:35:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:18.598 12:35:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:18.598 12:35:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:18.598 12:35:50 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:18.598 12:35:50 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:18.598 12:35:50 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:18.598 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:18.598 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:08:18.598 altname enp152s0f1np1 00:08:18.598 altname ens817f1np1 00:08:18.598 inet 192.168.100.9/24 scope global mlx_0_1 00:08:18.598 valid_lft forever preferred_lft forever 00:08:18.598 12:35:50 -- nvmf/common.sh@410 -- # return 0 00:08:18.598 12:35:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:18.598 12:35:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:18.598 12:35:50 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:18.598 12:35:50 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:18.598 12:35:50 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:18.598 12:35:50 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:18.598 12:35:50 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:18.598 12:35:50 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:18.598 12:35:50 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:18.598 12:35:50 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:18.598 12:35:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:18.598 12:35:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.598 12:35:50 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:18.598 12:35:50 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:18.598 12:35:50 -- nvmf/common.sh@104 -- # continue 2 00:08:18.598 12:35:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:18.598 12:35:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.598 12:35:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:18.598 12:35:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.598 12:35:50 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:18.598 12:35:50 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:18.598 12:35:50 -- nvmf/common.sh@104 -- # continue 2 00:08:18.598 12:35:50 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:18.598 12:35:50 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:18.598 12:35:50 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:18.598 12:35:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:18.598 12:35:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:18.598 12:35:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:18.598 12:35:50 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:18.598 12:35:50 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:18.598 12:35:50 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:18.598 12:35:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:18.598 12:35:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:18.598 12:35:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:18.598 12:35:50 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:18.598 192.168.100.9' 00:08:18.598 12:35:50 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:18.598 192.168.100.9' 00:08:18.598 12:35:50 -- nvmf/common.sh@445 -- # head -n 1 00:08:18.598 12:35:50 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:18.598 12:35:50 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:18.598 192.168.100.9' 00:08:18.598 12:35:50 -- nvmf/common.sh@446 -- # tail -n +2 00:08:18.598 12:35:50 -- nvmf/common.sh@446 -- # head -n 1 00:08:18.598 12:35:50 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:18.598 12:35:50 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:18.598 12:35:50 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:18.598 12:35:50 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:18.598 12:35:50 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:18.598 12:35:50 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:18.598 12:35:50 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:18.598 12:35:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:18.598 12:35:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:18.598 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:08:18.598 12:35:50 -- nvmf/common.sh@469 -- # nvmfpid=357210 00:08:18.598 12:35:50 -- nvmf/common.sh@470 -- # waitforlisten 357210 00:08:18.598 12:35:50 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:18.598 12:35:50 -- common/autotest_common.sh@829 -- # '[' -z 357210 ']' 00:08:18.598 12:35:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.598 12:35:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:18.598 12:35:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.598 12:35:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:18.598 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:08:18.598 [2024-11-20 12:35:50.624227] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
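The startup messages here come from nvmfappstart: the harness launches build/bin/nvmf_tgt with the core mask and trace flags shown, records its pid in nvmfpid (357210), and waits on the RPC socket before the test continues; the DPDK EAL parameter line that follows shows how those arguments are translated for DPDK initialization. A minimal sketch of that launch-and-wait handshake, assuming the default RPC socket path /var/tmp/spdk.sock seen in the trace (the exact polling logic inside waitforlisten may differ):

  # Start the target exactly as the trace shows, in the background.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll until the app answers RPCs on the UNIX domain socket, then proceed with the test.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done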
00:08:18.598 [2024-11-20 12:35:50.624294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.598 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.598 [2024-11-20 12:35:50.689077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.598 [2024-11-20 12:35:50.761243] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:18.598 [2024-11-20 12:35:50.761380] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.598 [2024-11-20 12:35:50.761390] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.598 [2024-11-20 12:35:50.761398] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.598 [2024-11-20 12:35:50.761573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.598 [2024-11-20 12:35:50.761715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.598 [2024-11-20 12:35:50.761871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.598 [2024-11-20 12:35:50.761872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.598 12:35:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.598 12:35:51 -- common/autotest_common.sh@862 -- # return 0 00:08:18.598 12:35:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:18.598 12:35:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:18.598 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.598 12:35:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.598 12:35:51 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:18.598 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.598 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.598 [2024-11-20 12:35:51.501432] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x69f7f0/0x6a3ce0) succeed. 00:08:18.598 [2024-11-20 12:35:51.516192] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6a0de0/0x6e5380) succeed. 
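With the RDMA transport created and both mlx5 ports registered as IB devices, discovery.sh provisions its test fixtures purely over RPC: one null bdev per subsystem (size 102400, block size 512, the NULL_BDEV_SIZE/NULL_BLOCK_SIZE values above), four subsystems cnode1-cnode4 with RDMA listeners on 192.168.100.8:4420, plus a listener for the discovery service itself and a referral on port 4430. A condensed sketch of the same steps using scripts/rpc.py directly, assuming the harness' rpc_cmd wrapper maps onto the same RPC socket (serial number and sizes copied from the trace):

  # Transport, as issued by target/discovery.sh@23 above.
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # Null bdev + subsystem + namespace + RDMA listener; the test repeats this for Null2..Null4 / cnode2..cnode4.
  ./scripts/rpc.py bdev_null_create Null1 102400 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # Expose the discovery subsystem and advertise a referral on the referral port.
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430

The six-entry discovery log printed further down is then fetched with nvme discover against 192.168.100.8:4420 and cross-checked against rpc_cmd nvmf_get_subsystems.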
00:08:18.598 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.598 12:35:51 -- target/discovery.sh@26 -- # seq 1 4 00:08:18.598 12:35:51 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.598 12:35:51 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:18.598 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.598 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.598 Null1 00:08:18.598 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.598 12:35:51 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:18.598 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.598 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.598 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.598 12:35:51 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:18.598 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.598 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.598 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.598 12:35:51 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:18.598 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.598 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.860 [2024-11-20 12:35:51.704609] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:18.860 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.860 12:35:51 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.860 12:35:51 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:18.860 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.860 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.860 Null2 00:08:18.860 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.860 12:35:51 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:18.860 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.860 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.860 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.860 12:35:51 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:18.860 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.860 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.860 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.860 12:35:51 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:18.860 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.861 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.861 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.861 12:35:51 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.861 12:35:51 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:18.861 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.861 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.861 Null3 00:08:18.861 12:35:51 -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:08:18.861 12:35:51 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:18.861 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.861 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.861 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.861 12:35:51 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:18.861 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.861 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.861 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.861 12:35:51 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:18.861 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.861 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.861 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.861 12:35:51 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.861 12:35:51 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:18.861 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.861 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.861 Null4 00:08:18.861 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.861 12:35:51 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:18.861 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.861 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.861 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.861 12:35:51 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:18.861 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.861 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.861 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.861 12:35:51 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:18.861 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.861 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.861 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.861 12:35:51 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:18.861 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.861 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.861 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.861 12:35:51 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:18.861 12:35:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.861 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.861 12:35:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.861 12:35:51 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 4420 00:08:19.123 00:08:19.123 Discovery Log Number of Records 6, Generation counter 6 00:08:19.123 =====Discovery Log Entry 0====== 00:08:19.123 trtype: 
rdma 00:08:19.123 adrfam: ipv4 00:08:19.123 subtype: current discovery subsystem 00:08:19.123 treq: not required 00:08:19.123 portid: 0 00:08:19.123 trsvcid: 4420 00:08:19.123 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:19.123 traddr: 192.168.100.8 00:08:19.123 eflags: explicit discovery connections, duplicate discovery information 00:08:19.123 rdma_prtype: not specified 00:08:19.123 rdma_qptype: connected 00:08:19.123 rdma_cms: rdma-cm 00:08:19.123 rdma_pkey: 0x0000 00:08:19.123 =====Discovery Log Entry 1====== 00:08:19.123 trtype: rdma 00:08:19.123 adrfam: ipv4 00:08:19.123 subtype: nvme subsystem 00:08:19.123 treq: not required 00:08:19.123 portid: 0 00:08:19.123 trsvcid: 4420 00:08:19.123 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:19.123 traddr: 192.168.100.8 00:08:19.123 eflags: none 00:08:19.123 rdma_prtype: not specified 00:08:19.123 rdma_qptype: connected 00:08:19.123 rdma_cms: rdma-cm 00:08:19.123 rdma_pkey: 0x0000 00:08:19.123 =====Discovery Log Entry 2====== 00:08:19.123 trtype: rdma 00:08:19.123 adrfam: ipv4 00:08:19.123 subtype: nvme subsystem 00:08:19.123 treq: not required 00:08:19.123 portid: 0 00:08:19.123 trsvcid: 4420 00:08:19.123 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:19.123 traddr: 192.168.100.8 00:08:19.123 eflags: none 00:08:19.123 rdma_prtype: not specified 00:08:19.123 rdma_qptype: connected 00:08:19.123 rdma_cms: rdma-cm 00:08:19.123 rdma_pkey: 0x0000 00:08:19.123 =====Discovery Log Entry 3====== 00:08:19.123 trtype: rdma 00:08:19.123 adrfam: ipv4 00:08:19.123 subtype: nvme subsystem 00:08:19.123 treq: not required 00:08:19.123 portid: 0 00:08:19.123 trsvcid: 4420 00:08:19.123 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:19.123 traddr: 192.168.100.8 00:08:19.123 eflags: none 00:08:19.123 rdma_prtype: not specified 00:08:19.123 rdma_qptype: connected 00:08:19.123 rdma_cms: rdma-cm 00:08:19.123 rdma_pkey: 0x0000 00:08:19.123 =====Discovery Log Entry 4====== 00:08:19.123 trtype: rdma 00:08:19.123 adrfam: ipv4 00:08:19.123 subtype: nvme subsystem 00:08:19.123 treq: not required 00:08:19.123 portid: 0 00:08:19.123 trsvcid: 4420 00:08:19.123 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:19.123 traddr: 192.168.100.8 00:08:19.123 eflags: none 00:08:19.123 rdma_prtype: not specified 00:08:19.123 rdma_qptype: connected 00:08:19.123 rdma_cms: rdma-cm 00:08:19.123 rdma_pkey: 0x0000 00:08:19.123 =====Discovery Log Entry 5====== 00:08:19.123 trtype: rdma 00:08:19.123 adrfam: ipv4 00:08:19.123 subtype: discovery subsystem referral 00:08:19.123 treq: not required 00:08:19.123 portid: 0 00:08:19.123 trsvcid: 4430 00:08:19.123 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:19.123 traddr: 192.168.100.8 00:08:19.123 eflags: none 00:08:19.123 rdma_prtype: unrecognized 00:08:19.123 rdma_qptype: unrecognized 00:08:19.123 rdma_cms: unrecognized 00:08:19.123 rdma_pkey: 0x0000 00:08:19.123 12:35:52 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:19.123 Perform nvmf subsystem discovery via RPC 00:08:19.123 12:35:52 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:19.123 12:35:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.123 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.123 [2024-11-20 12:35:52.013379] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:19.123 [ 00:08:19.123 { 00:08:19.123 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:19.124 "subtype": "Discovery", 
00:08:19.124 "listen_addresses": [ 00:08:19.124 { 00:08:19.124 "transport": "RDMA", 00:08:19.124 "trtype": "RDMA", 00:08:19.124 "adrfam": "IPv4", 00:08:19.124 "traddr": "192.168.100.8", 00:08:19.124 "trsvcid": "4420" 00:08:19.124 } 00:08:19.124 ], 00:08:19.124 "allow_any_host": true, 00:08:19.124 "hosts": [] 00:08:19.124 }, 00:08:19.124 { 00:08:19.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:19.124 "subtype": "NVMe", 00:08:19.124 "listen_addresses": [ 00:08:19.124 { 00:08:19.124 "transport": "RDMA", 00:08:19.124 "trtype": "RDMA", 00:08:19.124 "adrfam": "IPv4", 00:08:19.124 "traddr": "192.168.100.8", 00:08:19.124 "trsvcid": "4420" 00:08:19.124 } 00:08:19.124 ], 00:08:19.124 "allow_any_host": true, 00:08:19.124 "hosts": [], 00:08:19.124 "serial_number": "SPDK00000000000001", 00:08:19.124 "model_number": "SPDK bdev Controller", 00:08:19.124 "max_namespaces": 32, 00:08:19.124 "min_cntlid": 1, 00:08:19.124 "max_cntlid": 65519, 00:08:19.124 "namespaces": [ 00:08:19.124 { 00:08:19.124 "nsid": 1, 00:08:19.124 "bdev_name": "Null1", 00:08:19.124 "name": "Null1", 00:08:19.124 "nguid": "56364A081B7740DBAB5EC2591A3091C9", 00:08:19.124 "uuid": "56364a08-1b77-40db-ab5e-c2591a3091c9" 00:08:19.124 } 00:08:19.124 ] 00:08:19.124 }, 00:08:19.124 { 00:08:19.124 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:19.124 "subtype": "NVMe", 00:08:19.124 "listen_addresses": [ 00:08:19.124 { 00:08:19.124 "transport": "RDMA", 00:08:19.124 "trtype": "RDMA", 00:08:19.124 "adrfam": "IPv4", 00:08:19.124 "traddr": "192.168.100.8", 00:08:19.124 "trsvcid": "4420" 00:08:19.124 } 00:08:19.124 ], 00:08:19.124 "allow_any_host": true, 00:08:19.124 "hosts": [], 00:08:19.124 "serial_number": "SPDK00000000000002", 00:08:19.124 "model_number": "SPDK bdev Controller", 00:08:19.124 "max_namespaces": 32, 00:08:19.124 "min_cntlid": 1, 00:08:19.124 "max_cntlid": 65519, 00:08:19.124 "namespaces": [ 00:08:19.124 { 00:08:19.124 "nsid": 1, 00:08:19.124 "bdev_name": "Null2", 00:08:19.124 "name": "Null2", 00:08:19.124 "nguid": "9965E8493FEB4C51B5A7D88288BB723E", 00:08:19.124 "uuid": "9965e849-3feb-4c51-b5a7-d88288bb723e" 00:08:19.124 } 00:08:19.124 ] 00:08:19.124 }, 00:08:19.124 { 00:08:19.124 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:19.124 "subtype": "NVMe", 00:08:19.124 "listen_addresses": [ 00:08:19.124 { 00:08:19.124 "transport": "RDMA", 00:08:19.124 "trtype": "RDMA", 00:08:19.124 "adrfam": "IPv4", 00:08:19.124 "traddr": "192.168.100.8", 00:08:19.124 "trsvcid": "4420" 00:08:19.124 } 00:08:19.124 ], 00:08:19.124 "allow_any_host": true, 00:08:19.124 "hosts": [], 00:08:19.124 "serial_number": "SPDK00000000000003", 00:08:19.124 "model_number": "SPDK bdev Controller", 00:08:19.124 "max_namespaces": 32, 00:08:19.124 "min_cntlid": 1, 00:08:19.124 "max_cntlid": 65519, 00:08:19.124 "namespaces": [ 00:08:19.124 { 00:08:19.124 "nsid": 1, 00:08:19.124 "bdev_name": "Null3", 00:08:19.124 "name": "Null3", 00:08:19.124 "nguid": "CA97440AA78148F19B6ED5AFC3856C44", 00:08:19.124 "uuid": "ca97440a-a781-48f1-9b6e-d5afc3856c44" 00:08:19.124 } 00:08:19.124 ] 00:08:19.124 }, 00:08:19.124 { 00:08:19.124 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:19.124 "subtype": "NVMe", 00:08:19.124 "listen_addresses": [ 00:08:19.124 { 00:08:19.124 "transport": "RDMA", 00:08:19.124 "trtype": "RDMA", 00:08:19.124 "adrfam": "IPv4", 00:08:19.124 "traddr": "192.168.100.8", 00:08:19.124 "trsvcid": "4420" 00:08:19.124 } 00:08:19.124 ], 00:08:19.124 "allow_any_host": true, 00:08:19.124 "hosts": [], 00:08:19.124 "serial_number": "SPDK00000000000004", 00:08:19.124 "model_number": "SPDK bdev 
Controller", 00:08:19.124 "max_namespaces": 32, 00:08:19.124 "min_cntlid": 1, 00:08:19.124 "max_cntlid": 65519, 00:08:19.124 "namespaces": [ 00:08:19.124 { 00:08:19.124 "nsid": 1, 00:08:19.124 "bdev_name": "Null4", 00:08:19.124 "name": "Null4", 00:08:19.124 "nguid": "A5D218B6E71042A8903571EC176942B2", 00:08:19.124 "uuid": "a5d218b6-e710-42a8-9035-71ec176942b2" 00:08:19.124 } 00:08:19.124 ] 00:08:19.124 } 00:08:19.124 ] 00:08:19.124 12:35:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.124 12:35:52 -- target/discovery.sh@42 -- # seq 1 4 00:08:19.124 12:35:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:19.124 12:35:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.124 12:35:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.124 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.124 12:35:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.124 12:35:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:19.124 12:35:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.124 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.124 12:35:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.124 12:35:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:19.124 12:35:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:19.124 12:35:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.124 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.124 12:35:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.124 12:35:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:19.124 12:35:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.124 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.124 12:35:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.124 12:35:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:19.124 12:35:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:19.124 12:35:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.124 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.124 12:35:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.124 12:35:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:19.124 12:35:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.124 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.124 12:35:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.124 12:35:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:19.124 12:35:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:19.124 12:35:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.124 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.124 12:35:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.124 12:35:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:19.124 12:35:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.124 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.124 12:35:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.125 12:35:52 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:19.125 12:35:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.125 
12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.125 12:35:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.125 12:35:52 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:19.125 12:35:52 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:19.125 12:35:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.125 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.125 12:35:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.125 12:35:52 -- target/discovery.sh@49 -- # check_bdevs= 00:08:19.125 12:35:52 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:19.125 12:35:52 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:19.125 12:35:52 -- target/discovery.sh@57 -- # nvmftestfini 00:08:19.125 12:35:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:19.125 12:35:52 -- nvmf/common.sh@116 -- # sync 00:08:19.125 12:35:52 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:19.125 12:35:52 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:19.125 12:35:52 -- nvmf/common.sh@119 -- # set +e 00:08:19.125 12:35:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:19.125 12:35:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:19.125 rmmod nvme_rdma 00:08:19.125 rmmod nvme_fabrics 00:08:19.386 12:35:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:19.386 12:35:52 -- nvmf/common.sh@123 -- # set -e 00:08:19.386 12:35:52 -- nvmf/common.sh@124 -- # return 0 00:08:19.386 12:35:52 -- nvmf/common.sh@477 -- # '[' -n 357210 ']' 00:08:19.386 12:35:52 -- nvmf/common.sh@478 -- # killprocess 357210 00:08:19.386 12:35:52 -- common/autotest_common.sh@936 -- # '[' -z 357210 ']' 00:08:19.386 12:35:52 -- common/autotest_common.sh@940 -- # kill -0 357210 00:08:19.386 12:35:52 -- common/autotest_common.sh@941 -- # uname 00:08:19.386 12:35:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:19.386 12:35:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 357210 00:08:19.386 12:35:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:19.386 12:35:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:19.386 12:35:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 357210' 00:08:19.386 killing process with pid 357210 00:08:19.386 12:35:52 -- common/autotest_common.sh@955 -- # kill 357210 00:08:19.386 [2024-11-20 12:35:52.308804] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:19.386 12:35:52 -- common/autotest_common.sh@960 -- # wait 357210 00:08:19.647 12:35:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:19.647 12:35:52 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:19.647 00:08:19.647 real 0m9.366s 00:08:19.647 user 0m9.276s 00:08:19.647 sys 0m5.789s 00:08:19.647 12:35:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.647 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.647 ************************************ 00:08:19.647 END TEST nvmf_discovery 00:08:19.647 ************************************ 00:08:19.647 12:35:52 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:19.647 12:35:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:19.647 12:35:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.647 12:35:52 -- common/autotest_common.sh@10 -- 
# set +x 00:08:19.647 ************************************ 00:08:19.647 START TEST nvmf_referrals 00:08:19.647 ************************************ 00:08:19.647 12:35:52 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:19.647 * Looking for test storage... 00:08:19.647 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:19.647 12:35:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:19.647 12:35:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:19.647 12:35:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:19.647 12:35:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:19.647 12:35:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:19.647 12:35:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:19.647 12:35:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:19.647 12:35:52 -- scripts/common.sh@335 -- # IFS=.-: 00:08:19.647 12:35:52 -- scripts/common.sh@335 -- # read -ra ver1 00:08:19.647 12:35:52 -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.647 12:35:52 -- scripts/common.sh@336 -- # read -ra ver2 00:08:19.647 12:35:52 -- scripts/common.sh@337 -- # local 'op=<' 00:08:19.647 12:35:52 -- scripts/common.sh@339 -- # ver1_l=2 00:08:19.647 12:35:52 -- scripts/common.sh@340 -- # ver2_l=1 00:08:19.647 12:35:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:19.647 12:35:52 -- scripts/common.sh@343 -- # case "$op" in 00:08:19.647 12:35:52 -- scripts/common.sh@344 -- # : 1 00:08:19.647 12:35:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:19.647 12:35:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:19.647 12:35:52 -- scripts/common.sh@364 -- # decimal 1 00:08:19.647 12:35:52 -- scripts/common.sh@352 -- # local d=1 00:08:19.647 12:35:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.647 12:35:52 -- scripts/common.sh@354 -- # echo 1 00:08:19.647 12:35:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:19.647 12:35:52 -- scripts/common.sh@365 -- # decimal 2 00:08:19.647 12:35:52 -- scripts/common.sh@352 -- # local d=2 00:08:19.647 12:35:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.007 12:35:52 -- scripts/common.sh@354 -- # echo 2 00:08:20.007 12:35:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:20.007 12:35:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:20.007 12:35:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:20.007 12:35:52 -- scripts/common.sh@367 -- # return 0 00:08:20.007 12:35:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.007 12:35:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:20.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.007 --rc genhtml_branch_coverage=1 00:08:20.007 --rc genhtml_function_coverage=1 00:08:20.007 --rc genhtml_legend=1 00:08:20.007 --rc geninfo_all_blocks=1 00:08:20.007 --rc geninfo_unexecuted_blocks=1 00:08:20.007 00:08:20.007 ' 00:08:20.007 12:35:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:20.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.007 --rc genhtml_branch_coverage=1 00:08:20.007 --rc genhtml_function_coverage=1 00:08:20.007 --rc genhtml_legend=1 00:08:20.007 --rc geninfo_all_blocks=1 00:08:20.007 --rc geninfo_unexecuted_blocks=1 00:08:20.007 00:08:20.007 ' 00:08:20.007 12:35:52 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:20.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.007 --rc genhtml_branch_coverage=1 00:08:20.007 --rc genhtml_function_coverage=1 00:08:20.007 --rc genhtml_legend=1 00:08:20.007 --rc geninfo_all_blocks=1 00:08:20.007 --rc geninfo_unexecuted_blocks=1 00:08:20.007 00:08:20.007 ' 00:08:20.007 12:35:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:20.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.007 --rc genhtml_branch_coverage=1 00:08:20.007 --rc genhtml_function_coverage=1 00:08:20.007 --rc genhtml_legend=1 00:08:20.007 --rc geninfo_all_blocks=1 00:08:20.007 --rc geninfo_unexecuted_blocks=1 00:08:20.007 00:08:20.007 ' 00:08:20.007 12:35:52 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.007 12:35:52 -- nvmf/common.sh@7 -- # uname -s 00:08:20.007 12:35:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.007 12:35:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.007 12:35:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.007 12:35:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.007 12:35:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.007 12:35:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.007 12:35:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.007 12:35:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.007 12:35:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.007 12:35:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.007 12:35:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:20.007 12:35:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:20.007 12:35:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.007 12:35:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.007 12:35:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.007 12:35:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:20.007 12:35:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.007 12:35:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.007 12:35:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.007 12:35:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.007 12:35:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.008 12:35:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.008 12:35:52 -- paths/export.sh@5 -- # export PATH 00:08:20.008 12:35:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.008 12:35:52 -- nvmf/common.sh@46 -- # : 0 00:08:20.008 12:35:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:20.008 12:35:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:20.008 12:35:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:20.008 12:35:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.008 12:35:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.008 12:35:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:20.008 12:35:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:20.008 12:35:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:20.008 12:35:52 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:20.008 12:35:52 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:20.008 12:35:52 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:20.008 12:35:52 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:20.008 12:35:52 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:20.008 12:35:52 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:20.008 12:35:52 -- target/referrals.sh@37 -- # nvmftestinit 00:08:20.008 12:35:52 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:20.008 12:35:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.008 12:35:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:20.008 12:35:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:20.008 12:35:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:20.008 12:35:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.008 12:35:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.008 12:35:52 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:08:20.008 12:35:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:20.008 12:35:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:20.008 12:35:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:20.008 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:08:26.937 12:35:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:26.937 12:35:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:26.937 12:35:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:26.937 12:35:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:26.937 12:35:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:26.937 12:35:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:26.937 12:35:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:26.937 12:35:59 -- nvmf/common.sh@294 -- # net_devs=() 00:08:26.937 12:35:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:26.937 12:35:59 -- nvmf/common.sh@295 -- # e810=() 00:08:26.937 12:35:59 -- nvmf/common.sh@295 -- # local -ga e810 00:08:26.937 12:35:59 -- nvmf/common.sh@296 -- # x722=() 00:08:26.937 12:35:59 -- nvmf/common.sh@296 -- # local -ga x722 00:08:26.937 12:35:59 -- nvmf/common.sh@297 -- # mlx=() 00:08:26.937 12:35:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:26.937 12:35:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.937 12:35:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.937 12:35:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.937 12:35:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.937 12:35:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.937 12:35:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.937 12:35:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.937 12:35:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.937 12:35:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.937 12:35:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.937 12:35:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.937 12:35:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:26.937 12:35:59 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:26.937 12:35:59 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:26.937 12:35:59 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:26.937 12:35:59 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:26.937 12:35:59 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:26.937 12:35:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:26.937 12:35:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:26.937 12:35:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:08:26.937 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:08:26.937 12:35:59 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:26.937 12:35:59 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:26.937 12:35:59 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:26.937 12:35:59 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:26.937 12:35:59 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:26.937 12:35:59 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:26.937 12:35:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:26.937 12:35:59 
-- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:08:26.937 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:08:26.937 12:35:59 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:26.937 12:35:59 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:26.937 12:35:59 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:26.937 12:35:59 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:26.937 12:35:59 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:26.937 12:35:59 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:26.937 12:35:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:26.937 12:35:59 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:26.937 12:35:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:26.937 12:35:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.937 12:35:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:26.937 12:35:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.937 12:35:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:08:26.937 Found net devices under 0000:98:00.0: mlx_0_0 00:08:26.937 12:35:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.937 12:35:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:26.937 12:35:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.937 12:35:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:26.937 12:35:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.937 12:35:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:08:26.937 Found net devices under 0000:98:00.1: mlx_0_1 00:08:26.937 12:35:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.937 12:35:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:26.937 12:35:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:26.937 12:35:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:26.937 12:35:59 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:26.937 12:35:59 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:26.937 12:35:59 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:26.937 12:35:59 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:26.937 12:35:59 -- nvmf/common.sh@57 -- # uname 00:08:26.937 12:35:59 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:26.937 12:35:59 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:26.937 12:35:59 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:26.937 12:35:59 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:26.937 12:35:59 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:26.937 12:35:59 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:26.937 12:35:59 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:26.937 12:35:59 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:26.937 12:35:59 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:26.938 12:35:59 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:26.938 12:35:59 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:26.938 12:35:59 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:26.938 12:35:59 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:26.938 12:35:59 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:26.938 12:35:59 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:26.938 12:35:59 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:26.938 12:35:59 -- nvmf/common.sh@100 
-- # for net_dev in "${net_devs[@]}" 00:08:26.938 12:35:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.938 12:35:59 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:26.938 12:35:59 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:26.938 12:35:59 -- nvmf/common.sh@104 -- # continue 2 00:08:26.938 12:35:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:26.938 12:35:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.938 12:35:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:26.938 12:35:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.938 12:35:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:26.938 12:35:59 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:26.938 12:35:59 -- nvmf/common.sh@104 -- # continue 2 00:08:26.938 12:35:59 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:26.938 12:35:59 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:26.938 12:35:59 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:26.938 12:35:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:26.938 12:35:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:26.938 12:35:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:26.938 12:35:59 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:26.938 12:35:59 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:26.938 12:35:59 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:26.938 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:26.938 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:08:26.938 altname enp152s0f0np0 00:08:26.938 altname ens817f0np0 00:08:26.938 inet 192.168.100.8/24 scope global mlx_0_0 00:08:26.938 valid_lft forever preferred_lft forever 00:08:26.938 12:35:59 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:26.938 12:35:59 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:26.938 12:35:59 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:26.938 12:35:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:26.938 12:35:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:26.938 12:35:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:26.938 12:35:59 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:26.938 12:35:59 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:26.938 12:35:59 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:26.938 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:26.938 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:08:26.938 altname enp152s0f1np1 00:08:26.938 altname ens817f1np1 00:08:26.938 inet 192.168.100.9/24 scope global mlx_0_1 00:08:26.938 valid_lft forever preferred_lft forever 00:08:26.938 12:35:59 -- nvmf/common.sh@410 -- # return 0 00:08:26.938 12:35:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:26.938 12:35:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:26.938 12:35:59 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:26.938 12:35:59 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:26.938 12:35:59 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:26.938 12:35:59 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:26.938 12:35:59 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:26.938 12:35:59 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:26.938 12:35:59 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:26.938 12:36:00 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:26.938 12:36:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:26.938 12:36:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.938 12:36:00 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:26.938 12:36:00 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:26.938 12:36:00 -- nvmf/common.sh@104 -- # continue 2 00:08:26.938 12:36:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:26.938 12:36:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.938 12:36:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:26.938 12:36:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.938 12:36:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:26.938 12:36:00 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:26.938 12:36:00 -- nvmf/common.sh@104 -- # continue 2 00:08:26.938 12:36:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:26.938 12:36:00 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:26.938 12:36:00 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:26.938 12:36:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:26.938 12:36:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:26.938 12:36:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:26.938 12:36:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:26.938 12:36:00 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:26.938 12:36:00 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:26.938 12:36:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:26.938 12:36:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:26.938 12:36:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:27.234 12:36:00 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:27.234 192.168.100.9' 00:08:27.234 12:36:00 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:27.234 192.168.100.9' 00:08:27.234 12:36:00 -- nvmf/common.sh@445 -- # head -n 1 00:08:27.234 12:36:00 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:27.234 12:36:00 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:27.234 192.168.100.9' 00:08:27.234 12:36:00 -- nvmf/common.sh@446 -- # tail -n +2 00:08:27.234 12:36:00 -- nvmf/common.sh@446 -- # head -n 1 00:08:27.234 12:36:00 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:27.234 12:36:00 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:27.234 12:36:00 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:27.234 12:36:00 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:27.234 12:36:00 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:27.234 12:36:00 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:27.234 12:36:00 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:27.234 12:36:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:27.234 12:36:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:27.234 12:36:00 -- common/autotest_common.sh@10 -- # set +x 00:08:27.234 12:36:00 -- nvmf/common.sh@469 -- # nvmfpid=361349 00:08:27.234 12:36:00 -- nvmf/common.sh@470 -- # waitforlisten 361349 00:08:27.234 12:36:00 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:27.234 12:36:00 -- common/autotest_common.sh@829 -- # '[' -z 361349 ']' 00:08:27.234 12:36:00 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:27.234 12:36:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:27.234 12:36:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.234 12:36:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:27.234 12:36:00 -- common/autotest_common.sh@10 -- # set +x 00:08:27.234 [2024-11-20 12:36:00.141889] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:27.234 [2024-11-20 12:36:00.141940] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.234 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.234 [2024-11-20 12:36:00.203660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.234 [2024-11-20 12:36:00.267453] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:27.234 [2024-11-20 12:36:00.267583] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.234 [2024-11-20 12:36:00.267593] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.234 [2024-11-20 12:36:00.267602] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.234 [2024-11-20 12:36:00.267737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.234 [2024-11-20 12:36:00.267843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.234 [2024-11-20 12:36:00.268016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.234 [2024-11-20 12:36:00.268016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.846 12:36:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:27.846 12:36:00 -- common/autotest_common.sh@862 -- # return 0 00:08:27.846 12:36:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:27.846 12:36:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:27.846 12:36:00 -- common/autotest_common.sh@10 -- # set +x 00:08:28.108 12:36:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.108 12:36:00 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:28.108 12:36:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.108 12:36:00 -- common/autotest_common.sh@10 -- # set +x 00:08:28.108 [2024-11-20 12:36:01.004702] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15f37f0/0x15f7ce0) succeed. 00:08:28.108 [2024-11-20 12:36:01.019007] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15f4de0/0x1639380) succeed. 
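The trace above covers nvmfappstart for the referrals test: the harness launches build/bin/nvmf_tgt with -i 0 -e 0xFFFF -m 0xF, waits for the /var/tmp/spdk.sock RPC socket, and then creates the RDMA transport over RPC. A minimal standalone sketch of that sequence follows; the SPDK_ROOT variable and the socket-polling loop are illustrative assumptions, while the binary path, socket path, and transport options are the ones visible in the trace.

  # assumed layout: SPDK checked out under the workspace, as in this job
  SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk

  # start the NVMe-oF target in the background (same flags as nvmfappstart above)
  "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # poll (up to 100 tries) for the UNIX-domain RPC socket before issuing any RPCs
  for ((i = 0; i < 100; i++)); do
      [[ -S /var/tmp/spdk.sock ]] && break
      sleep 0.5
  done
  [[ -S /var/tmp/spdk.sock ]] || { echo "nvmf_tgt never created /var/tmp/spdk.sock" >&2; exit 1; }

  # create the RDMA transport with the same options the trace passes to rpc_cmd
  "$SPDK_ROOT/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192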
00:08:28.108 12:36:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.108 12:36:01 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:28.108 12:36:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.108 12:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.108 [2024-11-20 12:36:01.147030] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:28.108 12:36:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.108 12:36:01 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:28.108 12:36:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.108 12:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.108 12:36:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.108 12:36:01 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:28.108 12:36:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.108 12:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.108 12:36:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.108 12:36:01 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:28.108 12:36:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.108 12:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.108 12:36:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.108 12:36:01 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.108 12:36:01 -- target/referrals.sh@48 -- # jq length 00:08:28.108 12:36:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.108 12:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.108 12:36:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.369 12:36:01 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:28.369 12:36:01 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:28.369 12:36:01 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:28.369 12:36:01 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.369 12:36:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.369 12:36:01 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:28.369 12:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.369 12:36:01 -- target/referrals.sh@21 -- # sort 00:08:28.369 12:36:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.369 12:36:01 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:28.369 12:36:01 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:28.369 12:36:01 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:28.369 12:36:01 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:28.369 12:36:01 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:28.369 12:36:01 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:28.369 12:36:01 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:28.369 12:36:01 -- target/referrals.sh@26 -- # sort 00:08:28.369 12:36:01 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 
00:08:28.369 12:36:01 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:28.369 12:36:01 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:28.369 12:36:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.369 12:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.369 12:36:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.369 12:36:01 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:28.369 12:36:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.369 12:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.369 12:36:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.369 12:36:01 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:28.369 12:36:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.369 12:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.369 12:36:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.369 12:36:01 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.369 12:36:01 -- target/referrals.sh@56 -- # jq length 00:08:28.369 12:36:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.369 12:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.369 12:36:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.630 12:36:01 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:28.630 12:36:01 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:28.630 12:36:01 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:28.630 12:36:01 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:28.630 12:36:01 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:28.630 12:36:01 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:28.630 12:36:01 -- target/referrals.sh@26 -- # sort 00:08:28.630 12:36:01 -- target/referrals.sh@26 -- # echo 00:08:28.630 12:36:01 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:28.630 12:36:01 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:28.630 12:36:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.630 12:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.630 12:36:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.630 12:36:01 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:28.630 12:36:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.630 12:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.630 12:36:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.630 12:36:01 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:28.630 12:36:01 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:28.630 12:36:01 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.630 12:36:01 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:28.630 12:36:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.630 12:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.630 12:36:01 -- 
target/referrals.sh@21 -- # sort 00:08:28.630 12:36:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.630 12:36:01 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:28.630 12:36:01 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:28.630 12:36:01 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:28.630 12:36:01 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:28.630 12:36:01 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:28.630 12:36:01 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:28.630 12:36:01 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:28.630 12:36:01 -- target/referrals.sh@26 -- # sort 00:08:28.892 12:36:01 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:28.892 12:36:01 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:28.892 12:36:01 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:28.892 12:36:01 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:28.892 12:36:01 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:28.892 12:36:01 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:28.892 12:36:01 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:28.892 12:36:01 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:28.892 12:36:01 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:28.892 12:36:01 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:28.892 12:36:01 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:28.892 12:36:01 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:28.892 12:36:01 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:29.153 12:36:02 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:29.153 12:36:02 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:29.153 12:36:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.153 12:36:02 -- common/autotest_common.sh@10 -- # set +x 00:08:29.153 12:36:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.153 12:36:02 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:29.153 12:36:02 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:29.153 12:36:02 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:29.153 12:36:02 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:29.153 12:36:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.153 12:36:02 -- target/referrals.sh@21 -- # sort 00:08:29.154 12:36:02 -- common/autotest_common.sh@10 -- # set 
+x 00:08:29.154 12:36:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.154 12:36:02 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:29.154 12:36:02 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:29.154 12:36:02 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:29.154 12:36:02 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:29.154 12:36:02 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:29.154 12:36:02 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:29.154 12:36:02 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:29.154 12:36:02 -- target/referrals.sh@26 -- # sort 00:08:29.154 12:36:02 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:29.154 12:36:02 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:29.154 12:36:02 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:29.154 12:36:02 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:29.154 12:36:02 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:29.154 12:36:02 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:29.154 12:36:02 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:29.415 12:36:02 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:29.415 12:36:02 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:29.415 12:36:02 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:29.415 12:36:02 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:29.415 12:36:02 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:29.415 12:36:02 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:29.415 12:36:02 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:29.415 12:36:02 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:29.415 12:36:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.415 12:36:02 -- common/autotest_common.sh@10 -- # set +x 00:08:29.415 12:36:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.415 12:36:02 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:29.415 12:36:02 -- target/referrals.sh@82 -- # jq length 00:08:29.415 12:36:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.415 12:36:02 -- common/autotest_common.sh@10 -- # set +x 00:08:29.415 12:36:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.675 12:36:02 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:29.675 12:36:02 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:29.675 12:36:02 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:29.675 12:36:02 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:29.675 12:36:02 -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:29.675 12:36:02 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:29.675 12:36:02 -- target/referrals.sh@26 -- # sort 00:08:29.675 12:36:02 -- target/referrals.sh@26 -- # echo 00:08:29.675 12:36:02 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:29.675 12:36:02 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:29.675 12:36:02 -- target/referrals.sh@86 -- # nvmftestfini 00:08:29.675 12:36:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:29.675 12:36:02 -- nvmf/common.sh@116 -- # sync 00:08:29.675 12:36:02 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:29.675 12:36:02 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:29.675 12:36:02 -- nvmf/common.sh@119 -- # set +e 00:08:29.675 12:36:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:29.675 12:36:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:29.675 rmmod nvme_rdma 00:08:29.675 rmmod nvme_fabrics 00:08:29.675 12:36:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:29.675 12:36:02 -- nvmf/common.sh@123 -- # set -e 00:08:29.675 12:36:02 -- nvmf/common.sh@124 -- # return 0 00:08:29.675 12:36:02 -- nvmf/common.sh@477 -- # '[' -n 361349 ']' 00:08:29.676 12:36:02 -- nvmf/common.sh@478 -- # killprocess 361349 00:08:29.676 12:36:02 -- common/autotest_common.sh@936 -- # '[' -z 361349 ']' 00:08:29.676 12:36:02 -- common/autotest_common.sh@940 -- # kill -0 361349 00:08:29.676 12:36:02 -- common/autotest_common.sh@941 -- # uname 00:08:29.676 12:36:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:29.676 12:36:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 361349 00:08:29.936 12:36:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:29.936 12:36:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:29.936 12:36:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 361349' 00:08:29.936 killing process with pid 361349 00:08:29.936 12:36:02 -- common/autotest_common.sh@955 -- # kill 361349 00:08:29.936 12:36:02 -- common/autotest_common.sh@960 -- # wait 361349 00:08:29.936 12:36:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:29.936 12:36:03 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:29.936 00:08:29.936 real 0m10.435s 00:08:29.936 user 0m13.933s 00:08:29.936 sys 0m6.196s 00:08:29.936 12:36:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:29.936 12:36:03 -- common/autotest_common.sh@10 -- # set +x 00:08:29.936 ************************************ 00:08:29.936 END TEST nvmf_referrals 00:08:29.936 ************************************ 00:08:29.936 12:36:03 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:29.936 12:36:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:29.936 12:36:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.198 12:36:03 -- common/autotest_common.sh@10 -- # set +x 00:08:30.198 ************************************ 00:08:30.198 START TEST nvmf_connect_disconnect 00:08:30.198 ************************************ 00:08:30.198 12:36:03 -- common/autotest_common.sh@1114 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:30.198 * Looking for test storage... 00:08:30.198 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:30.198 12:36:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:30.198 12:36:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:30.198 12:36:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:30.198 12:36:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:30.198 12:36:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:30.198 12:36:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:30.198 12:36:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:30.198 12:36:03 -- scripts/common.sh@335 -- # IFS=.-: 00:08:30.198 12:36:03 -- scripts/common.sh@335 -- # read -ra ver1 00:08:30.198 12:36:03 -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.198 12:36:03 -- scripts/common.sh@336 -- # read -ra ver2 00:08:30.198 12:36:03 -- scripts/common.sh@337 -- # local 'op=<' 00:08:30.198 12:36:03 -- scripts/common.sh@339 -- # ver1_l=2 00:08:30.198 12:36:03 -- scripts/common.sh@340 -- # ver2_l=1 00:08:30.198 12:36:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:30.198 12:36:03 -- scripts/common.sh@343 -- # case "$op" in 00:08:30.198 12:36:03 -- scripts/common.sh@344 -- # : 1 00:08:30.198 12:36:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:30.198 12:36:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:30.198 12:36:03 -- scripts/common.sh@364 -- # decimal 1 00:08:30.198 12:36:03 -- scripts/common.sh@352 -- # local d=1 00:08:30.198 12:36:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.198 12:36:03 -- scripts/common.sh@354 -- # echo 1 00:08:30.198 12:36:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:30.198 12:36:03 -- scripts/common.sh@365 -- # decimal 2 00:08:30.198 12:36:03 -- scripts/common.sh@352 -- # local d=2 00:08:30.198 12:36:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.198 12:36:03 -- scripts/common.sh@354 -- # echo 2 00:08:30.198 12:36:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:30.198 12:36:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:30.198 12:36:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:30.198 12:36:03 -- scripts/common.sh@367 -- # return 0 00:08:30.198 12:36:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.198 12:36:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:30.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.198 --rc genhtml_branch_coverage=1 00:08:30.198 --rc genhtml_function_coverage=1 00:08:30.198 --rc genhtml_legend=1 00:08:30.198 --rc geninfo_all_blocks=1 00:08:30.198 --rc geninfo_unexecuted_blocks=1 00:08:30.198 00:08:30.198 ' 00:08:30.198 12:36:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:30.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.198 --rc genhtml_branch_coverage=1 00:08:30.198 --rc genhtml_function_coverage=1 00:08:30.198 --rc genhtml_legend=1 00:08:30.198 --rc geninfo_all_blocks=1 00:08:30.198 --rc geninfo_unexecuted_blocks=1 00:08:30.198 00:08:30.198 ' 00:08:30.198 12:36:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:30.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.198 --rc genhtml_branch_coverage=1 00:08:30.198 --rc genhtml_function_coverage=1 
00:08:30.198 --rc genhtml_legend=1 00:08:30.198 --rc geninfo_all_blocks=1 00:08:30.198 --rc geninfo_unexecuted_blocks=1 00:08:30.198 00:08:30.198 ' 00:08:30.198 12:36:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:30.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.198 --rc genhtml_branch_coverage=1 00:08:30.198 --rc genhtml_function_coverage=1 00:08:30.198 --rc genhtml_legend=1 00:08:30.198 --rc geninfo_all_blocks=1 00:08:30.198 --rc geninfo_unexecuted_blocks=1 00:08:30.198 00:08:30.198 ' 00:08:30.198 12:36:03 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.198 12:36:03 -- nvmf/common.sh@7 -- # uname -s 00:08:30.198 12:36:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.198 12:36:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.198 12:36:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.198 12:36:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.198 12:36:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.198 12:36:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.198 12:36:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.198 12:36:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.198 12:36:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.198 12:36:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.198 12:36:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:30.198 12:36:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:30.198 12:36:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.198 12:36:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.198 12:36:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.198 12:36:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:30.198 12:36:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.198 12:36:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.198 12:36:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.198 12:36:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.198 12:36:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.198 12:36:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.198 12:36:03 -- paths/export.sh@5 -- # export PATH 00:08:30.198 12:36:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.198 12:36:03 -- nvmf/common.sh@46 -- # : 0 00:08:30.198 12:36:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:30.198 12:36:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:30.198 12:36:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:30.198 12:36:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.198 12:36:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.198 12:36:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:30.198 12:36:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:30.198 12:36:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:30.198 12:36:03 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.198 12:36:03 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:30.198 12:36:03 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:30.198 12:36:03 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:30.198 12:36:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.198 12:36:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:30.198 12:36:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:30.198 12:36:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:30.198 12:36:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.198 12:36:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.198 12:36:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.198 12:36:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:30.198 12:36:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:30.198 12:36:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:30.198 12:36:03 -- common/autotest_common.sh@10 -- # set +x 00:08:38.342 12:36:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:38.342 12:36:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:38.342 12:36:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:38.342 12:36:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:38.342 12:36:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:38.342 12:36:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:38.342 12:36:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:38.342 12:36:10 -- nvmf/common.sh@294 -- # net_devs=() 00:08:38.342 12:36:10 -- nvmf/common.sh@294 -- # local -ga net_devs 
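Earlier in this trace, nvmf/common.sh derives the host identity that every later nvme discover/connect call reuses: nvme gen-hostnqn produces an nqn.2014-08.org.nvmexpress:uuid:<uuid> string, and the host ID is the UUID portion of it. A small sketch of that derivation; the parameter-expansion step is an assumption about how the ID is extracted, though it matches the values seen in this run (hostid 008c5ac1-... is exactly the uuid suffix of the hostnqn).

  # generate a host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTNQN=$(nvme gen-hostnqn)

  # assumed extraction: the host ID is everything after the last ':'
  NVME_HOSTID=${NVME_HOSTNQN##*:}

  # the option pair passed to every nvme discover/connect in this log
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")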
00:08:38.342 12:36:10 -- nvmf/common.sh@295 -- # e810=() 00:08:38.342 12:36:10 -- nvmf/common.sh@295 -- # local -ga e810 00:08:38.342 12:36:10 -- nvmf/common.sh@296 -- # x722=() 00:08:38.342 12:36:10 -- nvmf/common.sh@296 -- # local -ga x722 00:08:38.342 12:36:10 -- nvmf/common.sh@297 -- # mlx=() 00:08:38.342 12:36:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:38.342 12:36:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.342 12:36:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.342 12:36:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.342 12:36:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.342 12:36:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.342 12:36:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.342 12:36:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.342 12:36:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.342 12:36:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.342 12:36:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.342 12:36:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.342 12:36:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:38.342 12:36:10 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:38.342 12:36:10 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:38.342 12:36:10 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:38.342 12:36:10 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:38.342 12:36:10 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:38.342 12:36:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:38.342 12:36:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:38.342 12:36:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:08:38.342 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:08:38.342 12:36:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:38.342 12:36:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:38.342 12:36:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:38.342 12:36:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:38.342 12:36:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:38.342 12:36:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:38.342 12:36:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:38.342 12:36:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:08:38.342 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:08:38.342 12:36:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:38.342 12:36:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:38.342 12:36:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:38.342 12:36:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:38.342 12:36:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:38.343 12:36:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:38.343 12:36:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:38.343 12:36:10 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:38.343 12:36:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:38.343 12:36:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.343 12:36:10 
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:38.343 12:36:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.343 12:36:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:08:38.343 Found net devices under 0000:98:00.0: mlx_0_0 00:08:38.343 12:36:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.343 12:36:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:38.343 12:36:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.343 12:36:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:38.343 12:36:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.343 12:36:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:08:38.343 Found net devices under 0000:98:00.1: mlx_0_1 00:08:38.343 12:36:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.343 12:36:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:38.343 12:36:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:38.343 12:36:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:38.343 12:36:10 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:38.343 12:36:10 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:38.343 12:36:10 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:38.343 12:36:10 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:38.343 12:36:10 -- nvmf/common.sh@57 -- # uname 00:08:38.343 12:36:10 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:38.343 12:36:10 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:38.343 12:36:10 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:38.343 12:36:10 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:38.343 12:36:10 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:38.343 12:36:10 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:38.343 12:36:10 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:38.343 12:36:10 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:38.343 12:36:10 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:38.343 12:36:10 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:38.343 12:36:10 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:38.343 12:36:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:38.343 12:36:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:38.343 12:36:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:38.343 12:36:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:38.343 12:36:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:38.343 12:36:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:38.343 12:36:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.343 12:36:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:38.343 12:36:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:38.343 12:36:10 -- nvmf/common.sh@104 -- # continue 2 00:08:38.343 12:36:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:38.343 12:36:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.343 12:36:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:38.343 12:36:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.343 12:36:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:38.343 12:36:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:38.343 12:36:10 -- nvmf/common.sh@104 -- # continue 2 00:08:38.343 12:36:10 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:38.343 12:36:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:38.343 12:36:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:38.343 12:36:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:38.343 12:36:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:38.343 12:36:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:38.343 12:36:10 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:38.343 12:36:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:38.343 12:36:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:38.343 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:38.343 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:08:38.343 altname enp152s0f0np0 00:08:38.343 altname ens817f0np0 00:08:38.343 inet 192.168.100.8/24 scope global mlx_0_0 00:08:38.343 valid_lft forever preferred_lft forever 00:08:38.343 12:36:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:38.343 12:36:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:38.343 12:36:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:38.343 12:36:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:38.343 12:36:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:38.343 12:36:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:38.343 12:36:10 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:38.343 12:36:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:38.343 12:36:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:38.343 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:38.343 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:08:38.343 altname enp152s0f1np1 00:08:38.343 altname ens817f1np1 00:08:38.343 inet 192.168.100.9/24 scope global mlx_0_1 00:08:38.343 valid_lft forever preferred_lft forever 00:08:38.343 12:36:10 -- nvmf/common.sh@410 -- # return 0 00:08:38.343 12:36:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:38.343 12:36:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:38.343 12:36:10 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:38.343 12:36:10 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:38.343 12:36:10 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:38.343 12:36:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:38.343 12:36:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:38.343 12:36:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:38.343 12:36:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:38.343 12:36:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:38.343 12:36:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:38.343 12:36:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.343 12:36:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:38.343 12:36:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:38.343 12:36:10 -- nvmf/common.sh@104 -- # continue 2 00:08:38.343 12:36:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:38.343 12:36:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.343 12:36:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:38.343 12:36:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.343 12:36:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:38.343 12:36:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 
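The allocate_nic_ips trace above resolves each RDMA interface name to its IPv4 address with a short ip/awk/cut pipeline (nvmf/common.sh@112). Below is the same pipeline pulled out as a standalone helper for reference; the function wrapper is illustrative, the pipeline itself is taken verbatim from the trace.

  get_ip_address() {
      local interface=$1
      # one record per line, IPv4 only; field 4 is addr/prefix, strip the /prefix
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
  get_ip_address mlx_0_1   # -> 192.168.100.9 in this run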
00:08:38.343 12:36:10 -- nvmf/common.sh@104 -- # continue 2 00:08:38.343 12:36:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:38.343 12:36:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:38.343 12:36:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:38.343 12:36:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:38.343 12:36:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:38.343 12:36:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:38.343 12:36:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:38.343 12:36:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:38.343 12:36:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:38.343 12:36:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:38.343 12:36:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:38.343 12:36:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:38.343 12:36:10 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:38.343 192.168.100.9' 00:08:38.343 12:36:10 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:38.343 192.168.100.9' 00:08:38.343 12:36:10 -- nvmf/common.sh@445 -- # head -n 1 00:08:38.343 12:36:10 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:38.343 12:36:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:38.343 192.168.100.9' 00:08:38.343 12:36:10 -- nvmf/common.sh@446 -- # tail -n +2 00:08:38.343 12:36:10 -- nvmf/common.sh@446 -- # head -n 1 00:08:38.343 12:36:10 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:38.343 12:36:10 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:38.343 12:36:10 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:38.343 12:36:10 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:38.343 12:36:10 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:38.343 12:36:10 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:38.343 12:36:10 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:38.343 12:36:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:38.343 12:36:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.343 12:36:10 -- common/autotest_common.sh@10 -- # set +x 00:08:38.343 12:36:10 -- nvmf/common.sh@469 -- # nvmfpid=366153 00:08:38.343 12:36:10 -- nvmf/common.sh@470 -- # waitforlisten 366153 00:08:38.343 12:36:10 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.343 12:36:10 -- common/autotest_common.sh@829 -- # '[' -z 366153 ']' 00:08:38.343 12:36:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.343 12:36:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.343 12:36:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.343 12:36:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.343 12:36:10 -- common/autotest_common.sh@10 -- # set +x 00:08:38.343 [2024-11-20 12:36:10.583806] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
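Just above, get_available_rdma_ips collects one address per RDMA interface into RDMA_IP_LIST (a newline-separated string), and the first and second entries become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP via head/tail. A compressed sketch of that selection, using the two addresses reported in this run:

  # the two addresses collected in this run, one per RDMA interface
  RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)

  # first entry of the list
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  # second entry: drop the first line, then take one
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

  [ -n "$NVMF_FIRST_TARGET_IP" ] || echo "no RDMA-capable interface has an IPv4 address" >&2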
00:08:38.343 [2024-11-20 12:36:10.583877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.343 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.343 [2024-11-20 12:36:10.653571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.343 [2024-11-20 12:36:10.727598] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:38.344 [2024-11-20 12:36:10.727738] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.344 [2024-11-20 12:36:10.727748] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.344 [2024-11-20 12:36:10.727757] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.344 [2024-11-20 12:36:10.727933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.344 [2024-11-20 12:36:10.728046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.344 [2024-11-20 12:36:10.728282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.344 [2024-11-20 12:36:10.728283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.344 12:36:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:38.344 12:36:11 -- common/autotest_common.sh@862 -- # return 0 00:08:38.344 12:36:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:38.344 12:36:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:38.344 12:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:38.344 12:36:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.344 12:36:11 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:38.344 12:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.344 12:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:38.344 [2024-11-20 12:36:11.430273] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:38.606 [2024-11-20 12:36:11.460114] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xeab7f0/0xeafce0) succeed. 00:08:38.606 [2024-11-20 12:36:11.473659] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xeacde0/0xef1380) succeed. 
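With the target up and the RDMA transport created, the trace that follows provisions a 64 MiB malloc namespace under nqn.2016-06.io.spdk:cnode1, adds a listener on 192.168.100.8:4420, and then drives 100 host-side connect/disconnect iterations, which is what produces the long run of "disconnected 1 controller(s)" lines below. A condensed sketch of the provisioning and one iteration; the rpc.py path, NQNs, addresses, and flags are the ones in the trace, while the loop body is an illustrative simplification of the test script.

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
  HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6

  # target side: one 64 MiB / 512 B-block malloc bdev exported as a namespace
  $RPC bdev_malloc_create 64 512                                # returns Malloc0
  $RPC nvmf_create_subsystem $NQN -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns $NQN Malloc0
  $RPC nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4420

  # host side: one of the 100 connect/disconnect iterations the trace below performs
  for ((i = 1; i <= 100; i++)); do
      nvme connect -i 8 --hostnqn=$HOSTNQN --hostid=$HOSTID \
          -t rdma -n $NQN -a 192.168.100.8 -s 4420
      nvme disconnect -n $NQN   # prints "NQN:... disconnected 1 controller(s)"
  done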
00:08:38.606 12:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.606 12:36:11 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:38.606 12:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.606 12:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:38.606 12:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.606 12:36:11 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:38.606 12:36:11 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:38.606 12:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.606 12:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:38.606 12:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.606 12:36:11 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:38.606 12:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.606 12:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:38.606 12:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.606 12:36:11 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:38.606 12:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.606 12:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:38.606 [2024-11-20 12:36:11.630470] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:38.606 12:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.606 12:36:11 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:38.606 12:36:11 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:38.606 12:36:11 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:38.606 12:36:11 -- target/connect_disconnect.sh@34 -- # set +x 00:08:42.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.624 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:53.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.560 12:42:09 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:36.560 12:42:09 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:36.560 12:42:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:36.560 12:42:09 -- nvmf/common.sh@116 -- # sync 00:14:36.560 12:42:09 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:36.560 12:42:09 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:36.560 12:42:09 -- nvmf/common.sh@119 -- # set +e 00:14:36.560 12:42:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:36.560 12:42:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:36.560 rmmod nvme_rdma 00:14:36.560 rmmod nvme_fabrics 00:14:36.560 12:42:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:36.560 12:42:09 -- nvmf/common.sh@123 -- # set -e 00:14:36.560 12:42:09 -- nvmf/common.sh@124 -- # return 0 00:14:36.560 12:42:09 -- nvmf/common.sh@477 -- # '[' -n 366153 ']' 00:14:36.560 12:42:09 -- nvmf/common.sh@478 -- # killprocess 366153 00:14:36.560 12:42:09 -- common/autotest_common.sh@936 -- # '[' -z 366153 ']' 00:14:36.560 12:42:09 -- common/autotest_common.sh@940 -- # kill -0 366153 00:14:36.560 12:42:09 -- common/autotest_common.sh@941 -- # uname 00:14:36.560 12:42:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:36.560 12:42:09 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 366153 00:14:36.560 12:42:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:36.560 12:42:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:36.560 12:42:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 366153' 00:14:36.560 killing process with pid 366153 00:14:36.560 12:42:09 -- common/autotest_common.sh@955 -- # kill 366153 00:14:36.560 12:42:09 -- common/autotest_common.sh@960 -- # wait 366153 00:14:36.822 12:42:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:36.822 12:42:09 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:36.822 00:14:36.822 real 6m6.698s 00:14:36.822 user 23m51.678s 00:14:36.822 sys 0m18.709s 00:14:36.822 12:42:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:36.822 12:42:09 -- common/autotest_common.sh@10 -- # set +x 00:14:36.822 ************************************ 00:14:36.822 END TEST nvmf_connect_disconnect 00:14:36.822 ************************************ 00:14:36.822 12:42:09 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:36.822 12:42:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:36.822 12:42:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:36.822 12:42:09 -- common/autotest_common.sh@10 -- # set +x 00:14:36.822 ************************************ 00:14:36.822 START TEST nvmf_multitarget 00:14:36.822 ************************************ 00:14:36.822 12:42:09 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:36.822 * Looking for test storage... 00:14:36.822 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:36.822 12:42:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:36.822 12:42:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:36.822 12:42:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:37.083 12:42:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:37.083 12:42:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:37.083 12:42:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:37.083 12:42:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:37.083 12:42:09 -- scripts/common.sh@335 -- # IFS=.-: 00:14:37.083 12:42:09 -- scripts/common.sh@335 -- # read -ra ver1 00:14:37.083 12:42:09 -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.083 12:42:09 -- scripts/common.sh@336 -- # read -ra ver2 00:14:37.083 12:42:09 -- scripts/common.sh@337 -- # local 'op=<' 00:14:37.083 12:42:09 -- scripts/common.sh@339 -- # ver1_l=2 00:14:37.083 12:42:09 -- scripts/common.sh@340 -- # ver2_l=1 00:14:37.083 12:42:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:37.083 12:42:09 -- scripts/common.sh@343 -- # case "$op" in 00:14:37.083 12:42:09 -- scripts/common.sh@344 -- # : 1 00:14:37.083 12:42:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:37.083 12:42:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:37.083 12:42:09 -- scripts/common.sh@364 -- # decimal 1 00:14:37.083 12:42:09 -- scripts/common.sh@352 -- # local d=1 00:14:37.083 12:42:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.083 12:42:09 -- scripts/common.sh@354 -- # echo 1 00:14:37.083 12:42:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:37.083 12:42:09 -- scripts/common.sh@365 -- # decimal 2 00:14:37.083 12:42:09 -- scripts/common.sh@352 -- # local d=2 00:14:37.083 12:42:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.083 12:42:09 -- scripts/common.sh@354 -- # echo 2 00:14:37.083 12:42:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:37.083 12:42:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:37.083 12:42:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:37.083 12:42:09 -- scripts/common.sh@367 -- # return 0 00:14:37.083 12:42:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.083 12:42:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:37.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.083 --rc genhtml_branch_coverage=1 00:14:37.083 --rc genhtml_function_coverage=1 00:14:37.083 --rc genhtml_legend=1 00:14:37.083 --rc geninfo_all_blocks=1 00:14:37.083 --rc geninfo_unexecuted_blocks=1 00:14:37.083 00:14:37.083 ' 00:14:37.083 12:42:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:37.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.083 --rc genhtml_branch_coverage=1 00:14:37.083 --rc genhtml_function_coverage=1 00:14:37.083 --rc genhtml_legend=1 00:14:37.083 --rc geninfo_all_blocks=1 00:14:37.083 --rc geninfo_unexecuted_blocks=1 00:14:37.083 00:14:37.083 ' 00:14:37.083 12:42:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:37.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.083 --rc genhtml_branch_coverage=1 00:14:37.083 --rc genhtml_function_coverage=1 00:14:37.084 --rc genhtml_legend=1 00:14:37.084 --rc geninfo_all_blocks=1 00:14:37.084 --rc geninfo_unexecuted_blocks=1 00:14:37.084 00:14:37.084 ' 00:14:37.084 12:42:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:37.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.084 --rc genhtml_branch_coverage=1 00:14:37.084 --rc genhtml_function_coverage=1 00:14:37.084 --rc genhtml_legend=1 00:14:37.084 --rc geninfo_all_blocks=1 00:14:37.084 --rc geninfo_unexecuted_blocks=1 00:14:37.084 00:14:37.084 ' 00:14:37.084 12:42:09 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.084 12:42:09 -- nvmf/common.sh@7 -- # uname -s 00:14:37.084 12:42:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.084 12:42:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.084 12:42:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.084 12:42:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.084 12:42:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.084 12:42:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.084 12:42:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.084 12:42:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.084 12:42:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.084 12:42:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.084 12:42:09 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:37.084 12:42:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:37.084 12:42:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.084 12:42:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.084 12:42:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.084 12:42:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:37.084 12:42:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.084 12:42:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.084 12:42:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.084 12:42:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.084 12:42:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.084 12:42:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.084 12:42:10 -- paths/export.sh@5 -- # export PATH 00:14:37.084 12:42:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.084 12:42:10 -- nvmf/common.sh@46 -- # : 0 00:14:37.084 12:42:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:37.084 12:42:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:37.084 12:42:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:37.084 12:42:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.084 12:42:10 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.084 12:42:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:37.084 12:42:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:37.084 12:42:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:37.084 12:42:10 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:37.084 12:42:10 -- target/multitarget.sh@15 -- # nvmftestinit 00:14:37.084 12:42:10 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:37.084 12:42:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.084 12:42:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:37.084 12:42:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:37.084 12:42:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:37.084 12:42:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.084 12:42:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.084 12:42:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.084 12:42:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:37.084 12:42:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:37.084 12:42:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:37.084 12:42:10 -- common/autotest_common.sh@10 -- # set +x 00:14:45.234 12:42:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:45.234 12:42:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:45.234 12:42:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:45.234 12:42:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:45.234 12:42:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:45.234 12:42:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:45.234 12:42:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:45.234 12:42:16 -- nvmf/common.sh@294 -- # net_devs=() 00:14:45.234 12:42:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:45.234 12:42:16 -- nvmf/common.sh@295 -- # e810=() 00:14:45.234 12:42:16 -- nvmf/common.sh@295 -- # local -ga e810 00:14:45.234 12:42:16 -- nvmf/common.sh@296 -- # x722=() 00:14:45.234 12:42:16 -- nvmf/common.sh@296 -- # local -ga x722 00:14:45.234 12:42:16 -- nvmf/common.sh@297 -- # mlx=() 00:14:45.234 12:42:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:45.234 12:42:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:45.234 12:42:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:45.234 12:42:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:45.234 12:42:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:45.234 12:42:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:45.234 12:42:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:45.234 12:42:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:45.234 12:42:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:45.234 12:42:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:45.234 12:42:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:45.234 12:42:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:45.234 12:42:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:45.234 12:42:16 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:45.234 12:42:16 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 
00:14:45.234 12:42:16 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:45.234 12:42:16 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:45.234 12:42:16 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:45.234 12:42:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:45.234 12:42:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:45.234 12:42:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:14:45.234 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:14:45.234 12:42:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:45.234 12:42:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:45.234 12:42:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:45.234 12:42:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:45.234 12:42:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:45.234 12:42:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:45.234 12:42:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:45.234 12:42:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:14:45.234 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:14:45.234 12:42:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:45.234 12:42:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:45.234 12:42:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:45.235 12:42:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:45.235 12:42:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:45.235 12:42:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:45.235 12:42:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:45.235 12:42:16 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:45.235 12:42:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:45.235 12:42:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.235 12:42:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:45.235 12:42:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.235 12:42:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:14:45.235 Found net devices under 0000:98:00.0: mlx_0_0 00:14:45.235 12:42:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.235 12:42:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:45.235 12:42:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.235 12:42:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:45.235 12:42:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.235 12:42:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:14:45.235 Found net devices under 0000:98:00.1: mlx_0_1 00:14:45.235 12:42:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.235 12:42:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:45.235 12:42:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:45.235 12:42:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:45.235 12:42:16 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:45.235 12:42:16 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:45.235 12:42:16 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:45.235 12:42:16 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:45.235 12:42:16 -- nvmf/common.sh@57 -- # uname 00:14:45.235 12:42:16 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:45.235 12:42:16 -- nvmf/common.sh@61 -- # modprobe ib_cm 
00:14:45.235 12:42:16 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:45.235 12:42:16 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:45.235 12:42:16 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:45.235 12:42:16 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:45.235 12:42:16 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:45.235 12:42:16 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:45.235 12:42:16 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:45.235 12:42:16 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:45.235 12:42:16 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:45.235 12:42:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:45.235 12:42:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:45.235 12:42:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:45.235 12:42:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:45.235 12:42:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:45.235 12:42:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:45.235 12:42:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:45.235 12:42:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:45.235 12:42:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:45.235 12:42:16 -- nvmf/common.sh@104 -- # continue 2 00:14:45.235 12:42:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:45.235 12:42:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:45.235 12:42:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:45.235 12:42:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:45.235 12:42:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:45.235 12:42:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:45.235 12:42:16 -- nvmf/common.sh@104 -- # continue 2 00:14:45.235 12:42:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:45.235 12:42:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:14:45.235 12:42:16 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:45.235 12:42:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:45.235 12:42:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:45.235 12:42:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:45.235 12:42:16 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:45.235 12:42:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:45.235 12:42:16 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:45.235 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:45.235 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:14:45.235 altname enp152s0f0np0 00:14:45.235 altname ens817f0np0 00:14:45.235 inet 192.168.100.8/24 scope global mlx_0_0 00:14:45.235 valid_lft forever preferred_lft forever 00:14:45.235 12:42:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:45.235 12:42:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:45.235 12:42:16 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:45.235 12:42:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:45.235 12:42:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:45.235 12:42:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:45.235 12:42:16 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:45.235 12:42:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:45.235 12:42:16 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:45.235 5: mlx_0_1: mtu 1500 qdisc mq 
state DOWN group default qlen 1000 00:14:45.235 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:14:45.235 altname enp152s0f1np1 00:14:45.235 altname ens817f1np1 00:14:45.235 inet 192.168.100.9/24 scope global mlx_0_1 00:14:45.235 valid_lft forever preferred_lft forever 00:14:45.235 12:42:16 -- nvmf/common.sh@410 -- # return 0 00:14:45.235 12:42:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:45.235 12:42:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:45.235 12:42:16 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:45.235 12:42:16 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:45.235 12:42:16 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:45.235 12:42:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:45.235 12:42:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:45.235 12:42:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:45.235 12:42:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:45.235 12:42:17 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:45.235 12:42:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:45.235 12:42:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:45.235 12:42:17 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:45.235 12:42:17 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:45.235 12:42:17 -- nvmf/common.sh@104 -- # continue 2 00:14:45.235 12:42:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:45.235 12:42:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:45.235 12:42:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:45.235 12:42:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:45.235 12:42:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:45.235 12:42:17 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:45.235 12:42:17 -- nvmf/common.sh@104 -- # continue 2 00:14:45.235 12:42:17 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:45.235 12:42:17 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:45.235 12:42:17 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:45.235 12:42:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:45.235 12:42:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:45.235 12:42:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:45.235 12:42:17 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:45.235 12:42:17 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:45.235 12:42:17 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:45.235 12:42:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:45.235 12:42:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:45.235 12:42:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:45.235 12:42:17 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:45.235 192.168.100.9' 00:14:45.235 12:42:17 -- nvmf/common.sh@445 -- # head -n 1 00:14:45.235 12:42:17 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:45.235 192.168.100.9' 00:14:45.235 12:42:17 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:45.235 12:42:17 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:45.235 192.168.100.9' 00:14:45.235 12:42:17 -- nvmf/common.sh@446 -- # tail -n +2 00:14:45.235 12:42:17 -- nvmf/common.sh@446 -- # head -n 1 00:14:45.235 12:42:17 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:45.235 12:42:17 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:45.235 12:42:17 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:45.235 12:42:17 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:45.235 12:42:17 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:45.235 12:42:17 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:45.235 12:42:17 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:45.235 12:42:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:45.235 12:42:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:45.235 12:42:17 -- common/autotest_common.sh@10 -- # set +x 00:14:45.235 12:42:17 -- nvmf/common.sh@469 -- # nvmfpid=444067 00:14:45.235 12:42:17 -- nvmf/common.sh@470 -- # waitforlisten 444067 00:14:45.235 12:42:17 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:45.235 12:42:17 -- common/autotest_common.sh@829 -- # '[' -z 444067 ']' 00:14:45.235 12:42:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.235 12:42:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:45.235 12:42:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.235 12:42:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:45.235 12:42:17 -- common/autotest_common.sh@10 -- # set +x 00:14:45.235 [2024-11-20 12:42:17.142381] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:45.235 [2024-11-20 12:42:17.142443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.235 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.235 [2024-11-20 12:42:17.206604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.235 [2024-11-20 12:42:17.270232] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:45.235 [2024-11-20 12:42:17.270360] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.236 [2024-11-20 12:42:17.270370] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.236 [2024-11-20 12:42:17.270378] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
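(For orientation: the target launch logged above (nvmfappstart -m 0xF, pid 444067) amounts to starting nvmf_tgt with all tracepoint groups enabled and then waiting for its RPC socket at /var/tmp/spdk.sock. A minimal stand-alone sketch follows; the polling loop and the rpc_get_methods probe are illustrative assumptions, not the harness's actual waitforlisten helper.)

    # Sketch only; the readiness probe below is an assumption, not the autotest waitforlisten.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done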
00:14:45.236 [2024-11-20 12:42:17.270516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.236 [2024-11-20 12:42:17.270633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.236 [2024-11-20 12:42:17.270789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.236 [2024-11-20 12:42:17.270791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.236 12:42:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.236 12:42:17 -- common/autotest_common.sh@862 -- # return 0 00:14:45.236 12:42:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:45.236 12:42:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:45.236 12:42:17 -- common/autotest_common.sh@10 -- # set +x 00:14:45.236 12:42:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.236 12:42:17 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:45.236 12:42:17 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:45.236 12:42:17 -- target/multitarget.sh@21 -- # jq length 00:14:45.236 12:42:18 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:45.236 12:42:18 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:45.236 "nvmf_tgt_1" 00:14:45.236 12:42:18 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:45.236 "nvmf_tgt_2" 00:14:45.236 12:42:18 -- target/multitarget.sh@28 -- # jq length 00:14:45.236 12:42:18 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:45.497 12:42:18 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:45.497 12:42:18 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:45.497 true 00:14:45.497 12:42:18 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:45.497 true 00:14:45.497 12:42:18 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:45.497 12:42:18 -- target/multitarget.sh@35 -- # jq length 00:14:45.759 12:42:18 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:45.759 12:42:18 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:45.759 12:42:18 -- target/multitarget.sh@41 -- # nvmftestfini 00:14:45.759 12:42:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:45.759 12:42:18 -- nvmf/common.sh@116 -- # sync 00:14:45.759 12:42:18 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:45.759 12:42:18 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:45.759 12:42:18 -- nvmf/common.sh@119 -- # set +e 00:14:45.759 12:42:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:45.759 12:42:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:45.759 rmmod nvme_rdma 00:14:45.759 rmmod nvme_fabrics 00:14:45.759 12:42:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:45.759 12:42:18 -- nvmf/common.sh@123 -- # set -e 00:14:45.759 12:42:18 -- nvmf/common.sh@124 -- # 
return 0 00:14:45.759 12:42:18 -- nvmf/common.sh@477 -- # '[' -n 444067 ']' 00:14:45.760 12:42:18 -- nvmf/common.sh@478 -- # killprocess 444067 00:14:45.760 12:42:18 -- common/autotest_common.sh@936 -- # '[' -z 444067 ']' 00:14:45.760 12:42:18 -- common/autotest_common.sh@940 -- # kill -0 444067 00:14:45.760 12:42:18 -- common/autotest_common.sh@941 -- # uname 00:14:45.760 12:42:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:45.760 12:42:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 444067 00:14:45.760 12:42:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:45.760 12:42:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:45.760 12:42:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 444067' 00:14:45.760 killing process with pid 444067 00:14:45.760 12:42:18 -- common/autotest_common.sh@955 -- # kill 444067 00:14:45.760 12:42:18 -- common/autotest_common.sh@960 -- # wait 444067 00:14:46.021 12:42:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:46.021 12:42:18 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:46.021 00:14:46.021 real 0m9.130s 00:14:46.021 user 0m9.561s 00:14:46.021 sys 0m5.639s 00:14:46.021 12:42:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:46.021 12:42:18 -- common/autotest_common.sh@10 -- # set +x 00:14:46.021 ************************************ 00:14:46.021 END TEST nvmf_multitarget 00:14:46.021 ************************************ 00:14:46.021 12:42:18 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:46.021 12:42:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:46.021 12:42:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:46.021 12:42:18 -- common/autotest_common.sh@10 -- # set +x 00:14:46.021 ************************************ 00:14:46.021 START TEST nvmf_rpc 00:14:46.021 ************************************ 00:14:46.021 12:42:18 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:46.021 * Looking for test storage... 
00:14:46.021 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:46.021 12:42:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:46.021 12:42:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:46.021 12:42:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:46.281 12:42:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:46.281 12:42:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:46.281 12:42:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:46.281 12:42:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:46.281 12:42:19 -- scripts/common.sh@335 -- # IFS=.-: 00:14:46.281 12:42:19 -- scripts/common.sh@335 -- # read -ra ver1 00:14:46.281 12:42:19 -- scripts/common.sh@336 -- # IFS=.-: 00:14:46.281 12:42:19 -- scripts/common.sh@336 -- # read -ra ver2 00:14:46.281 12:42:19 -- scripts/common.sh@337 -- # local 'op=<' 00:14:46.281 12:42:19 -- scripts/common.sh@339 -- # ver1_l=2 00:14:46.281 12:42:19 -- scripts/common.sh@340 -- # ver2_l=1 00:14:46.281 12:42:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:46.281 12:42:19 -- scripts/common.sh@343 -- # case "$op" in 00:14:46.281 12:42:19 -- scripts/common.sh@344 -- # : 1 00:14:46.281 12:42:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:46.281 12:42:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:46.281 12:42:19 -- scripts/common.sh@364 -- # decimal 1 00:14:46.281 12:42:19 -- scripts/common.sh@352 -- # local d=1 00:14:46.281 12:42:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:46.281 12:42:19 -- scripts/common.sh@354 -- # echo 1 00:14:46.281 12:42:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:46.281 12:42:19 -- scripts/common.sh@365 -- # decimal 2 00:14:46.281 12:42:19 -- scripts/common.sh@352 -- # local d=2 00:14:46.281 12:42:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:46.281 12:42:19 -- scripts/common.sh@354 -- # echo 2 00:14:46.281 12:42:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:46.281 12:42:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:46.281 12:42:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:46.281 12:42:19 -- scripts/common.sh@367 -- # return 0 00:14:46.281 12:42:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:46.281 12:42:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:46.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.281 --rc genhtml_branch_coverage=1 00:14:46.281 --rc genhtml_function_coverage=1 00:14:46.281 --rc genhtml_legend=1 00:14:46.281 --rc geninfo_all_blocks=1 00:14:46.281 --rc geninfo_unexecuted_blocks=1 00:14:46.281 00:14:46.281 ' 00:14:46.281 12:42:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:46.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.282 --rc genhtml_branch_coverage=1 00:14:46.282 --rc genhtml_function_coverage=1 00:14:46.282 --rc genhtml_legend=1 00:14:46.282 --rc geninfo_all_blocks=1 00:14:46.282 --rc geninfo_unexecuted_blocks=1 00:14:46.282 00:14:46.282 ' 00:14:46.282 12:42:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:46.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.282 --rc genhtml_branch_coverage=1 00:14:46.282 --rc genhtml_function_coverage=1 00:14:46.282 --rc genhtml_legend=1 00:14:46.282 --rc geninfo_all_blocks=1 00:14:46.282 --rc geninfo_unexecuted_blocks=1 00:14:46.282 00:14:46.282 ' 
00:14:46.282 12:42:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:46.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.282 --rc genhtml_branch_coverage=1 00:14:46.282 --rc genhtml_function_coverage=1 00:14:46.282 --rc genhtml_legend=1 00:14:46.282 --rc geninfo_all_blocks=1 00:14:46.282 --rc geninfo_unexecuted_blocks=1 00:14:46.282 00:14:46.282 ' 00:14:46.282 12:42:19 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:46.282 12:42:19 -- nvmf/common.sh@7 -- # uname -s 00:14:46.282 12:42:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.282 12:42:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.282 12:42:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.282 12:42:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.282 12:42:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.282 12:42:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.282 12:42:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.282 12:42:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.282 12:42:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.282 12:42:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.282 12:42:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:46.282 12:42:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:46.282 12:42:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.282 12:42:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.282 12:42:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:46.282 12:42:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:46.282 12:42:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.282 12:42:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.282 12:42:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.282 12:42:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.282 12:42:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.282 12:42:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.282 12:42:19 -- paths/export.sh@5 -- # export PATH 00:14:46.282 12:42:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.282 12:42:19 -- nvmf/common.sh@46 -- # : 0 00:14:46.282 12:42:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:46.282 12:42:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:46.282 12:42:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:46.282 12:42:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.282 12:42:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.282 12:42:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:46.282 12:42:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:46.282 12:42:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:46.282 12:42:19 -- target/rpc.sh@11 -- # loops=5 00:14:46.282 12:42:19 -- target/rpc.sh@23 -- # nvmftestinit 00:14:46.282 12:42:19 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:46.282 12:42:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.282 12:42:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:46.282 12:42:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:46.282 12:42:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:46.282 12:42:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.282 12:42:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.282 12:42:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.282 12:42:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:46.282 12:42:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:46.282 12:42:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:46.282 12:42:19 -- common/autotest_common.sh@10 -- # set +x 00:14:54.435 12:42:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:54.435 12:42:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:54.435 12:42:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:54.435 12:42:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:54.435 12:42:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:54.435 12:42:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:54.435 12:42:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:54.435 12:42:26 -- nvmf/common.sh@294 -- # net_devs=() 00:14:54.435 12:42:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:54.435 12:42:26 -- nvmf/common.sh@295 -- # e810=() 00:14:54.435 12:42:26 -- nvmf/common.sh@295 -- # local -ga e810 00:14:54.435 
12:42:26 -- nvmf/common.sh@296 -- # x722=() 00:14:54.435 12:42:26 -- nvmf/common.sh@296 -- # local -ga x722 00:14:54.435 12:42:26 -- nvmf/common.sh@297 -- # mlx=() 00:14:54.435 12:42:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:54.436 12:42:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:54.436 12:42:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:54.436 12:42:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:54.436 12:42:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:54.436 12:42:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:54.436 12:42:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:54.436 12:42:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:54.436 12:42:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:54.436 12:42:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:54.436 12:42:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:54.436 12:42:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:54.436 12:42:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:54.436 12:42:26 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:14:54.436 12:42:26 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:54.436 12:42:26 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:54.436 12:42:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:54.436 12:42:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:54.436 12:42:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:14:54.436 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:14:54.436 12:42:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:54.436 12:42:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:54.436 12:42:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:14:54.436 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:14:54.436 12:42:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:54.436 12:42:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:54.436 12:42:26 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:54.436 12:42:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.436 12:42:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:54.436 12:42:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:14:54.436 12:42:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:14:54.436 Found net devices under 0000:98:00.0: mlx_0_0 00:14:54.436 12:42:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.436 12:42:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:54.436 12:42:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.436 12:42:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:54.436 12:42:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.436 12:42:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:14:54.436 Found net devices under 0000:98:00.1: mlx_0_1 00:14:54.436 12:42:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.436 12:42:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:54.436 12:42:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:54.436 12:42:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:54.436 12:42:26 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:54.436 12:42:26 -- nvmf/common.sh@57 -- # uname 00:14:54.436 12:42:26 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:54.436 12:42:26 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:14:54.436 12:42:26 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:54.436 12:42:26 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:54.436 12:42:26 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:54.436 12:42:26 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:54.436 12:42:26 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:54.436 12:42:26 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:54.436 12:42:26 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:54.436 12:42:26 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:54.436 12:42:26 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:54.436 12:42:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:54.436 12:42:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:54.436 12:42:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:54.436 12:42:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:54.436 12:42:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:54.436 12:42:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:54.436 12:42:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.436 12:42:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:54.436 12:42:26 -- nvmf/common.sh@104 -- # continue 2 00:14:54.436 12:42:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:54.436 12:42:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.436 12:42:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.436 12:42:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:54.436 12:42:26 -- nvmf/common.sh@104 -- # continue 2 00:14:54.436 12:42:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:54.436 12:42:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 
00:14:54.436 12:42:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:54.436 12:42:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:54.436 12:42:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:54.436 12:42:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:54.436 12:42:26 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:54.436 12:42:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:54.436 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:54.436 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:14:54.436 altname enp152s0f0np0 00:14:54.436 altname ens817f0np0 00:14:54.436 inet 192.168.100.8/24 scope global mlx_0_0 00:14:54.436 valid_lft forever preferred_lft forever 00:14:54.436 12:42:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:54.436 12:42:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:54.436 12:42:26 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:54.436 12:42:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:54.436 12:42:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:54.436 12:42:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:54.436 12:42:26 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:54.436 12:42:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:54.436 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:54.436 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:14:54.436 altname enp152s0f1np1 00:14:54.436 altname ens817f1np1 00:14:54.436 inet 192.168.100.9/24 scope global mlx_0_1 00:14:54.436 valid_lft forever preferred_lft forever 00:14:54.436 12:42:26 -- nvmf/common.sh@410 -- # return 0 00:14:54.436 12:42:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:54.436 12:42:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:54.436 12:42:26 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:54.436 12:42:26 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:54.436 12:42:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:54.436 12:42:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:54.436 12:42:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:54.436 12:42:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:54.436 12:42:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:54.436 12:42:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:54.436 12:42:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.436 12:42:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:54.436 12:42:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:54.436 12:42:26 -- nvmf/common.sh@104 -- # continue 2 00:14:54.436 12:42:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:54.436 12:42:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.436 12:42:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:54.437 12:42:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.437 12:42:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:54.437 12:42:26 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:54.437 12:42:26 -- nvmf/common.sh@104 -- # continue 2 00:14:54.437 12:42:26 -- nvmf/common.sh@85 -- # for nic_name in 
$(get_rdma_if_list) 00:14:54.437 12:42:26 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:54.437 12:42:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:54.437 12:42:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:54.437 12:42:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:54.437 12:42:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:54.437 12:42:26 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:54.437 12:42:26 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:54.437 12:42:26 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:54.437 12:42:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:54.437 12:42:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:54.437 12:42:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:54.437 12:42:26 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:54.437 192.168.100.9' 00:14:54.437 12:42:26 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:54.437 192.168.100.9' 00:14:54.437 12:42:26 -- nvmf/common.sh@445 -- # head -n 1 00:14:54.437 12:42:26 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:54.437 12:42:26 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:54.437 192.168.100.9' 00:14:54.437 12:42:26 -- nvmf/common.sh@446 -- # tail -n +2 00:14:54.437 12:42:26 -- nvmf/common.sh@446 -- # head -n 1 00:14:54.437 12:42:26 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:54.437 12:42:26 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:54.437 12:42:26 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:54.437 12:42:26 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:54.437 12:42:26 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:54.437 12:42:26 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:54.437 12:42:26 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:54.437 12:42:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:54.437 12:42:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:54.437 12:42:26 -- common/autotest_common.sh@10 -- # set +x 00:14:54.437 12:42:26 -- nvmf/common.sh@469 -- # nvmfpid=448187 00:14:54.437 12:42:26 -- nvmf/common.sh@470 -- # waitforlisten 448187 00:14:54.437 12:42:26 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:54.437 12:42:26 -- common/autotest_common.sh@829 -- # '[' -z 448187 ']' 00:14:54.437 12:42:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.437 12:42:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.437 12:42:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.437 12:42:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.437 12:42:26 -- common/autotest_common.sh@10 -- # set +x 00:14:54.437 [2024-11-20 12:42:26.497643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
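A minimal sketch of the head/tail split used above to turn the newline-separated RDMA_IP_LIST into the first and second target addresses (values and variable names copied from this run):

RDMA_IP_LIST='192.168.100.8
192.168.100.9'

# First target IP: the first line of the list.
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)

# Second target IP: drop the first line, then take the next one.
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)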
00:14:54.437 [2024-11-20 12:42:26.497693] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.437 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.437 [2024-11-20 12:42:26.559561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.437 [2024-11-20 12:42:26.623760] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:54.437 [2024-11-20 12:42:26.623893] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.437 [2024-11-20 12:42:26.623903] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.437 [2024-11-20 12:42:26.623912] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.437 [2024-11-20 12:42:26.624048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.437 [2024-11-20 12:42:26.624318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.437 [2024-11-20 12:42:26.624468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.437 [2024-11-20 12:42:26.624468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.437 12:42:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.437 12:42:27 -- common/autotest_common.sh@862 -- # return 0 00:14:54.437 12:42:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:54.437 12:42:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:54.437 12:42:27 -- common/autotest_common.sh@10 -- # set +x 00:14:54.437 12:42:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.437 12:42:27 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:54.437 12:42:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.437 12:42:27 -- common/autotest_common.sh@10 -- # set +x 00:14:54.437 12:42:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.437 12:42:27 -- target/rpc.sh@26 -- # stats='{ 00:14:54.437 "tick_rate": 2400000000, 00:14:54.437 "poll_groups": [ 00:14:54.437 { 00:14:54.437 "name": "nvmf_tgt_poll_group_0", 00:14:54.437 "admin_qpairs": 0, 00:14:54.437 "io_qpairs": 0, 00:14:54.437 "current_admin_qpairs": 0, 00:14:54.437 "current_io_qpairs": 0, 00:14:54.437 "pending_bdev_io": 0, 00:14:54.437 "completed_nvme_io": 0, 00:14:54.437 "transports": [] 00:14:54.437 }, 00:14:54.437 { 00:14:54.437 "name": "nvmf_tgt_poll_group_1", 00:14:54.437 "admin_qpairs": 0, 00:14:54.437 "io_qpairs": 0, 00:14:54.437 "current_admin_qpairs": 0, 00:14:54.437 "current_io_qpairs": 0, 00:14:54.437 "pending_bdev_io": 0, 00:14:54.437 "completed_nvme_io": 0, 00:14:54.437 "transports": [] 00:14:54.437 }, 00:14:54.437 { 00:14:54.437 "name": "nvmf_tgt_poll_group_2", 00:14:54.437 "admin_qpairs": 0, 00:14:54.437 "io_qpairs": 0, 00:14:54.437 "current_admin_qpairs": 0, 00:14:54.437 "current_io_qpairs": 0, 00:14:54.437 "pending_bdev_io": 0, 00:14:54.437 "completed_nvme_io": 0, 00:14:54.437 "transports": [] 00:14:54.437 }, 00:14:54.437 { 00:14:54.437 "name": "nvmf_tgt_poll_group_3", 00:14:54.437 "admin_qpairs": 0, 00:14:54.437 "io_qpairs": 0, 00:14:54.437 "current_admin_qpairs": 0, 00:14:54.437 "current_io_qpairs": 0, 00:14:54.437 "pending_bdev_io": 0, 00:14:54.437 "completed_nvme_io": 0, 00:14:54.437 "transports": [] 
00:14:54.437 } 00:14:54.437 ] 00:14:54.437 }' 00:14:54.437 12:42:27 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:54.437 12:42:27 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:54.437 12:42:27 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:54.437 12:42:27 -- target/rpc.sh@15 -- # wc -l 00:14:54.437 12:42:27 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:54.437 12:42:27 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:54.437 12:42:27 -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:54.437 12:42:27 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:54.437 12:42:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.437 12:42:27 -- common/autotest_common.sh@10 -- # set +x 00:14:54.437 [2024-11-20 12:42:27.484380] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8667c0/0x86acb0) succeed. 00:14:54.437 [2024-11-20 12:42:27.498973] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x867db0/0x8ac350) succeed. 00:14:54.698 12:42:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.698 12:42:27 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:54.698 12:42:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.698 12:42:27 -- common/autotest_common.sh@10 -- # set +x 00:14:54.698 12:42:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.698 12:42:27 -- target/rpc.sh@33 -- # stats='{ 00:14:54.698 "tick_rate": 2400000000, 00:14:54.698 "poll_groups": [ 00:14:54.698 { 00:14:54.698 "name": "nvmf_tgt_poll_group_0", 00:14:54.698 "admin_qpairs": 0, 00:14:54.699 "io_qpairs": 0, 00:14:54.699 "current_admin_qpairs": 0, 00:14:54.699 "current_io_qpairs": 0, 00:14:54.699 "pending_bdev_io": 0, 00:14:54.699 "completed_nvme_io": 0, 00:14:54.699 "transports": [ 00:14:54.699 { 00:14:54.699 "trtype": "RDMA", 00:14:54.699 "pending_data_buffer": 0, 00:14:54.699 "devices": [ 00:14:54.699 { 00:14:54.699 "name": "mlx5_0", 00:14:54.699 "polls": 16065, 00:14:54.699 "idle_polls": 16065, 00:14:54.699 "completions": 0, 00:14:54.699 "requests": 0, 00:14:54.699 "request_latency": 0, 00:14:54.699 "pending_free_request": 0, 00:14:54.699 "pending_rdma_read": 0, 00:14:54.699 "pending_rdma_write": 0, 00:14:54.699 "pending_rdma_send": 0, 00:14:54.699 "total_send_wrs": 0, 00:14:54.699 "send_doorbell_updates": 0, 00:14:54.699 "total_recv_wrs": 4096, 00:14:54.699 "recv_doorbell_updates": 1 00:14:54.699 }, 00:14:54.699 { 00:14:54.699 "name": "mlx5_1", 00:14:54.699 "polls": 16065, 00:14:54.699 "idle_polls": 16065, 00:14:54.699 "completions": 0, 00:14:54.699 "requests": 0, 00:14:54.699 "request_latency": 0, 00:14:54.699 "pending_free_request": 0, 00:14:54.699 "pending_rdma_read": 0, 00:14:54.699 "pending_rdma_write": 0, 00:14:54.699 "pending_rdma_send": 0, 00:14:54.699 "total_send_wrs": 0, 00:14:54.699 "send_doorbell_updates": 0, 00:14:54.699 "total_recv_wrs": 4096, 00:14:54.699 "recv_doorbell_updates": 1 00:14:54.699 } 00:14:54.699 ] 00:14:54.699 } 00:14:54.699 ] 00:14:54.699 }, 00:14:54.699 { 00:14:54.699 "name": "nvmf_tgt_poll_group_1", 00:14:54.699 "admin_qpairs": 0, 00:14:54.699 "io_qpairs": 0, 00:14:54.699 "current_admin_qpairs": 0, 00:14:54.699 "current_io_qpairs": 0, 00:14:54.699 "pending_bdev_io": 0, 00:14:54.699 "completed_nvme_io": 0, 00:14:54.699 "transports": [ 00:14:54.699 { 00:14:54.699 "trtype": "RDMA", 00:14:54.699 "pending_data_buffer": 0, 00:14:54.699 "devices": [ 00:14:54.699 { 00:14:54.699 "name": "mlx5_0", 00:14:54.699 "polls": 15686, 
00:14:54.699 "idle_polls": 15686, 00:14:54.699 "completions": 0, 00:14:54.699 "requests": 0, 00:14:54.699 "request_latency": 0, 00:14:54.699 "pending_free_request": 0, 00:14:54.699 "pending_rdma_read": 0, 00:14:54.699 "pending_rdma_write": 0, 00:14:54.699 "pending_rdma_send": 0, 00:14:54.699 "total_send_wrs": 0, 00:14:54.699 "send_doorbell_updates": 0, 00:14:54.699 "total_recv_wrs": 4096, 00:14:54.699 "recv_doorbell_updates": 1 00:14:54.699 }, 00:14:54.699 { 00:14:54.699 "name": "mlx5_1", 00:14:54.699 "polls": 15686, 00:14:54.699 "idle_polls": 15686, 00:14:54.699 "completions": 0, 00:14:54.699 "requests": 0, 00:14:54.699 "request_latency": 0, 00:14:54.699 "pending_free_request": 0, 00:14:54.699 "pending_rdma_read": 0, 00:14:54.699 "pending_rdma_write": 0, 00:14:54.699 "pending_rdma_send": 0, 00:14:54.699 "total_send_wrs": 0, 00:14:54.699 "send_doorbell_updates": 0, 00:14:54.699 "total_recv_wrs": 4096, 00:14:54.699 "recv_doorbell_updates": 1 00:14:54.699 } 00:14:54.699 ] 00:14:54.699 } 00:14:54.699 ] 00:14:54.699 }, 00:14:54.699 { 00:14:54.699 "name": "nvmf_tgt_poll_group_2", 00:14:54.699 "admin_qpairs": 0, 00:14:54.699 "io_qpairs": 0, 00:14:54.699 "current_admin_qpairs": 0, 00:14:54.699 "current_io_qpairs": 0, 00:14:54.699 "pending_bdev_io": 0, 00:14:54.699 "completed_nvme_io": 0, 00:14:54.699 "transports": [ 00:14:54.699 { 00:14:54.699 "trtype": "RDMA", 00:14:54.699 "pending_data_buffer": 0, 00:14:54.699 "devices": [ 00:14:54.699 { 00:14:54.699 "name": "mlx5_0", 00:14:54.699 "polls": 5716, 00:14:54.699 "idle_polls": 5716, 00:14:54.699 "completions": 0, 00:14:54.699 "requests": 0, 00:14:54.699 "request_latency": 0, 00:14:54.699 "pending_free_request": 0, 00:14:54.699 "pending_rdma_read": 0, 00:14:54.699 "pending_rdma_write": 0, 00:14:54.699 "pending_rdma_send": 0, 00:14:54.699 "total_send_wrs": 0, 00:14:54.699 "send_doorbell_updates": 0, 00:14:54.699 "total_recv_wrs": 4096, 00:14:54.699 "recv_doorbell_updates": 1 00:14:54.699 }, 00:14:54.699 { 00:14:54.699 "name": "mlx5_1", 00:14:54.699 "polls": 5716, 00:14:54.699 "idle_polls": 5716, 00:14:54.699 "completions": 0, 00:14:54.699 "requests": 0, 00:14:54.699 "request_latency": 0, 00:14:54.699 "pending_free_request": 0, 00:14:54.699 "pending_rdma_read": 0, 00:14:54.699 "pending_rdma_write": 0, 00:14:54.699 "pending_rdma_send": 0, 00:14:54.699 "total_send_wrs": 0, 00:14:54.699 "send_doorbell_updates": 0, 00:14:54.699 "total_recv_wrs": 4096, 00:14:54.699 "recv_doorbell_updates": 1 00:14:54.699 } 00:14:54.699 ] 00:14:54.699 } 00:14:54.699 ] 00:14:54.699 }, 00:14:54.699 { 00:14:54.699 "name": "nvmf_tgt_poll_group_3", 00:14:54.699 "admin_qpairs": 0, 00:14:54.699 "io_qpairs": 0, 00:14:54.699 "current_admin_qpairs": 0, 00:14:54.699 "current_io_qpairs": 0, 00:14:54.699 "pending_bdev_io": 0, 00:14:54.699 "completed_nvme_io": 0, 00:14:54.699 "transports": [ 00:14:54.699 { 00:14:54.699 "trtype": "RDMA", 00:14:54.699 "pending_data_buffer": 0, 00:14:54.699 "devices": [ 00:14:54.699 { 00:14:54.699 "name": "mlx5_0", 00:14:54.699 "polls": 884, 00:14:54.699 "idle_polls": 884, 00:14:54.699 "completions": 0, 00:14:54.699 "requests": 0, 00:14:54.699 "request_latency": 0, 00:14:54.699 "pending_free_request": 0, 00:14:54.699 "pending_rdma_read": 0, 00:14:54.699 "pending_rdma_write": 0, 00:14:54.699 "pending_rdma_send": 0, 00:14:54.699 "total_send_wrs": 0, 00:14:54.699 "send_doorbell_updates": 0, 00:14:54.699 "total_recv_wrs": 4096, 00:14:54.699 "recv_doorbell_updates": 1 00:14:54.699 }, 00:14:54.699 { 00:14:54.699 "name": "mlx5_1", 00:14:54.699 "polls": 884, 
00:14:54.699 "idle_polls": 884, 00:14:54.699 "completions": 0, 00:14:54.699 "requests": 0, 00:14:54.699 "request_latency": 0, 00:14:54.699 "pending_free_request": 0, 00:14:54.699 "pending_rdma_read": 0, 00:14:54.699 "pending_rdma_write": 0, 00:14:54.699 "pending_rdma_send": 0, 00:14:54.699 "total_send_wrs": 0, 00:14:54.699 "send_doorbell_updates": 0, 00:14:54.699 "total_recv_wrs": 4096, 00:14:54.699 "recv_doorbell_updates": 1 00:14:54.699 } 00:14:54.699 ] 00:14:54.699 } 00:14:54.699 ] 00:14:54.699 } 00:14:54.699 ] 00:14:54.699 }' 00:14:54.699 12:42:27 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:54.699 12:42:27 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:54.699 12:42:27 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:54.699 12:42:27 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:54.699 12:42:27 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:54.699 12:42:27 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:54.699 12:42:27 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:54.699 12:42:27 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:54.699 12:42:27 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:54.699 12:42:27 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:54.699 12:42:27 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:14:54.699 12:42:27 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:14:54.699 12:42:27 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:14:54.699 12:42:27 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:14:54.699 12:42:27 -- target/rpc.sh@15 -- # wc -l 00:14:54.961 12:42:27 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:14:54.961 12:42:27 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:14:54.961 12:42:27 -- target/rpc.sh@41 -- # transport_type=RDMA 00:14:54.961 12:42:27 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:14:54.961 12:42:27 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:14:54.961 12:42:27 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:14:54.961 12:42:27 -- target/rpc.sh@15 -- # wc -l 00:14:54.962 12:42:27 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:14:54.962 12:42:27 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:14:54.962 12:42:27 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:54.962 12:42:27 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:54.962 12:42:27 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:54.962 12:42:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.962 12:42:27 -- common/autotest_common.sh@10 -- # set +x 00:14:54.962 Malloc1 00:14:54.962 12:42:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.962 12:42:27 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:54.962 12:42:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.962 12:42:27 -- common/autotest_common.sh@10 -- # set +x 00:14:54.962 12:42:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.962 12:42:27 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:54.962 12:42:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.962 12:42:27 -- common/autotest_common.sh@10 -- # set +x 00:14:54.962 12:42:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.962 
12:42:27 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:54.962 12:42:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.962 12:42:27 -- common/autotest_common.sh@10 -- # set +x 00:14:54.962 12:42:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.962 12:42:27 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:54.962 12:42:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.962 12:42:27 -- common/autotest_common.sh@10 -- # set +x 00:14:54.962 [2024-11-20 12:42:27.973072] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:54.962 12:42:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.962 12:42:27 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 192.168.100.8 -s 4420 00:14:54.962 12:42:27 -- common/autotest_common.sh@650 -- # local es=0 00:14:54.962 12:42:27 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 192.168.100.8 -s 4420 00:14:54.962 12:42:27 -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:54.962 12:42:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:54.962 12:42:27 -- common/autotest_common.sh@642 -- # type -t nvme 00:14:54.962 12:42:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:54.962 12:42:27 -- common/autotest_common.sh@644 -- # type -P nvme 00:14:54.962 12:42:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:54.962 12:42:27 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:54.962 12:42:27 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:54.962 12:42:27 -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 192.168.100.8 -s 4420 00:14:54.962 [2024-11-20 12:42:28.028626] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:14:55.223 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:55.223 could not add new controller: failed to write to nvme-fabrics device 00:14:55.223 12:42:28 -- common/autotest_common.sh@653 -- # es=1 00:14:55.223 12:42:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:55.223 12:42:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:55.223 12:42:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:55.223 12:42:28 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:55.223 12:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.223 12:42:28 -- common/autotest_common.sh@10 -- # set +x 00:14:55.223 
12:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.223 12:42:28 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:56.610 12:42:29 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:56.610 12:42:29 -- common/autotest_common.sh@1187 -- # local i=0 00:14:56.610 12:42:29 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.610 12:42:29 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:56.610 12:42:29 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:58.527 12:42:31 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:58.528 12:42:31 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:58.528 12:42:31 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:58.528 12:42:31 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:58.528 12:42:31 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:58.528 12:42:31 -- common/autotest_common.sh@1197 -- # return 0 00:14:58.528 12:42:31 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:59.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.915 12:42:32 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:59.915 12:42:32 -- common/autotest_common.sh@1208 -- # local i=0 00:14:59.915 12:42:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:59.915 12:42:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.915 12:42:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:59.915 12:42:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.915 12:42:32 -- common/autotest_common.sh@1220 -- # return 0 00:14:59.915 12:42:32 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:59.915 12:42:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.915 12:42:32 -- common/autotest_common.sh@10 -- # set +x 00:14:59.915 12:42:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.915 12:42:32 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:59.915 12:42:32 -- common/autotest_common.sh@650 -- # local es=0 00:14:59.915 12:42:32 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:59.915 12:42:32 -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:59.915 12:42:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.915 12:42:32 -- common/autotest_common.sh@642 -- # type -t nvme 00:14:59.915 12:42:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.915 12:42:32 -- common/autotest_common.sh@644 -- # type -P nvme 00:14:59.915 12:42:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.915 12:42:32 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:59.915 
12:42:32 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:59.915 12:42:32 -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:59.915 [2024-11-20 12:42:32.802272] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:14:59.915 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:59.915 could not add new controller: failed to write to nvme-fabrics device 00:14:59.915 12:42:32 -- common/autotest_common.sh@653 -- # es=1 00:14:59.915 12:42:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:59.915 12:42:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:59.915 12:42:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:59.915 12:42:32 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:59.915 12:42:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.915 12:42:32 -- common/autotest_common.sh@10 -- # set +x 00:14:59.915 12:42:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.916 12:42:32 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:01.303 12:42:34 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:01.303 12:42:34 -- common/autotest_common.sh@1187 -- # local i=0 00:15:01.303 12:42:34 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:01.303 12:42:34 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:01.303 12:42:34 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:03.218 12:42:36 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:03.218 12:42:36 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:03.218 12:42:36 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:03.479 12:42:36 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:03.479 12:42:36 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:03.479 12:42:36 -- common/autotest_common.sh@1197 -- # return 0 00:15:03.479 12:42:36 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:04.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.866 12:42:37 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:04.866 12:42:37 -- common/autotest_common.sh@1208 -- # local i=0 00:15:04.866 12:42:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:04.866 12:42:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:04.866 12:42:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:04.866 12:42:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:04.866 12:42:37 -- common/autotest_common.sh@1220 -- # return 0 00:15:04.866 12:42:37 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:04.866 12:42:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.866 12:42:37 -- common/autotest_common.sh@10 -- # set +x 00:15:04.866 12:42:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
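The two connect attempts above (rejected with "does not allow host", then accepted once the host NQN is registered) exercise the subsystem host ACL. A hedged sketch of the same flow driven through the SPDK scripts/rpc.py CLI instead of the suite's rpc_cmd wrapper, reusing the NQNs and address from this run:

SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6

# Rejected: no ACL entry for this host and allow_any_host is disabled.
nvme connect -t rdma -n "$SUBNQN" -a 192.168.100.8 -s 4420 --hostnqn="$HOSTNQN"

# Add the host to the subsystem's ACL; the same connect then succeeds.
scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
nvme connect -t rdma -n "$SUBNQN" -a 192.168.100.8 -s 4420 --hostnqn="$HOSTNQN"
nvme disconnect -n "$SUBNQN"

# Alternatively, open the subsystem to any host, as the trace does next:
scripts/rpc.py nvmf_subsystem_allow_any_host -e "$SUBNQN"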
00:15:04.866 12:42:37 -- target/rpc.sh@81 -- # seq 1 5 00:15:04.866 12:42:37 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:04.866 12:42:37 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:04.866 12:42:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.866 12:42:37 -- common/autotest_common.sh@10 -- # set +x 00:15:04.866 12:42:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.866 12:42:37 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:04.866 12:42:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.866 12:42:37 -- common/autotest_common.sh@10 -- # set +x 00:15:04.866 [2024-11-20 12:42:37.713818] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:04.866 12:42:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.866 12:42:37 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:04.866 12:42:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.866 12:42:37 -- common/autotest_common.sh@10 -- # set +x 00:15:04.866 12:42:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.866 12:42:37 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:04.866 12:42:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.866 12:42:37 -- common/autotest_common.sh@10 -- # set +x 00:15:04.866 12:42:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.866 12:42:37 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:06.251 12:42:39 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:06.251 12:42:39 -- common/autotest_common.sh@1187 -- # local i=0 00:15:06.251 12:42:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:06.251 12:42:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:06.251 12:42:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:08.166 12:42:41 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:08.166 12:42:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:08.166 12:42:41 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:08.166 12:42:41 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:08.166 12:42:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:08.166 12:42:41 -- common/autotest_common.sh@1197 -- # return 0 00:15:08.166 12:42:41 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:09.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.553 12:42:42 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:09.553 12:42:42 -- common/autotest_common.sh@1208 -- # local i=0 00:15:09.553 12:42:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:09.553 12:42:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.553 12:42:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:09.553 12:42:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.553 12:42:42 -- common/autotest_common.sh@1220 -- # return 0 00:15:09.553 
12:42:42 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:09.553 12:42:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.554 12:42:42 -- common/autotest_common.sh@10 -- # set +x 00:15:09.554 12:42:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.554 12:42:42 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.554 12:42:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.554 12:42:42 -- common/autotest_common.sh@10 -- # set +x 00:15:09.554 12:42:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.554 12:42:42 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:09.554 12:42:42 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:09.554 12:42:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.554 12:42:42 -- common/autotest_common.sh@10 -- # set +x 00:15:09.554 12:42:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.554 12:42:42 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:09.554 12:42:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.554 12:42:42 -- common/autotest_common.sh@10 -- # set +x 00:15:09.554 [2024-11-20 12:42:42.557245] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:09.554 12:42:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.554 12:42:42 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:09.554 12:42:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.554 12:42:42 -- common/autotest_common.sh@10 -- # set +x 00:15:09.554 12:42:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.554 12:42:42 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:09.554 12:42:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.554 12:42:42 -- common/autotest_common.sh@10 -- # set +x 00:15:09.554 12:42:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.554 12:42:42 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:11.471 12:42:44 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:11.471 12:42:44 -- common/autotest_common.sh@1187 -- # local i=0 00:15:11.471 12:42:44 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.471 12:42:44 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:11.471 12:42:44 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:13.386 12:42:46 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:13.386 12:42:46 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:13.386 12:42:46 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:13.386 12:42:46 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:13.386 12:42:46 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:13.386 12:42:46 -- common/autotest_common.sh@1197 -- # return 0 00:15:13.386 12:42:46 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:14.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.329 12:42:47 -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:14.329 12:42:47 -- common/autotest_common.sh@1208 -- # local i=0 00:15:14.329 12:42:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:14.329 12:42:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.329 12:42:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:14.329 12:42:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.329 12:42:47 -- common/autotest_common.sh@1220 -- # return 0 00:15:14.329 12:42:47 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:14.329 12:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.329 12:42:47 -- common/autotest_common.sh@10 -- # set +x 00:15:14.329 12:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.329 12:42:47 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.329 12:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.329 12:42:47 -- common/autotest_common.sh@10 -- # set +x 00:15:14.329 12:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.329 12:42:47 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:14.329 12:42:47 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:14.329 12:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.329 12:42:47 -- common/autotest_common.sh@10 -- # set +x 00:15:14.329 12:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.329 12:42:47 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:14.329 12:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.329 12:42:47 -- common/autotest_common.sh@10 -- # set +x 00:15:14.329 [2024-11-20 12:42:47.429425] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:14.329 12:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.329 12:42:47 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:14.329 12:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.329 12:42:47 -- common/autotest_common.sh@10 -- # set +x 00:15:14.590 12:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.590 12:42:47 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:14.590 12:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.590 12:42:47 -- common/autotest_common.sh@10 -- # set +x 00:15:14.590 12:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.590 12:42:47 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:15.977 12:42:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:15.977 12:42:48 -- common/autotest_common.sh@1187 -- # local i=0 00:15:15.977 12:42:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:15.977 12:42:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:15.977 12:42:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:17.891 12:42:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:17.891 12:42:50 -- 
common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:17.891 12:42:50 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:17.891 12:42:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:17.891 12:42:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.891 12:42:50 -- common/autotest_common.sh@1197 -- # return 0 00:15:17.891 12:42:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:19.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.278 12:42:52 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:19.278 12:42:52 -- common/autotest_common.sh@1208 -- # local i=0 00:15:19.278 12:42:52 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:19.278 12:42:52 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.278 12:42:52 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:19.278 12:42:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.278 12:42:52 -- common/autotest_common.sh@1220 -- # return 0 00:15:19.278 12:42:52 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:19.278 12:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.278 12:42:52 -- common/autotest_common.sh@10 -- # set +x 00:15:19.278 12:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.278 12:42:52 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.278 12:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.278 12:42:52 -- common/autotest_common.sh@10 -- # set +x 00:15:19.278 12:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.278 12:42:52 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:19.278 12:42:52 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:19.278 12:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.278 12:42:52 -- common/autotest_common.sh@10 -- # set +x 00:15:19.278 12:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.278 12:42:52 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:19.278 12:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.278 12:42:52 -- common/autotest_common.sh@10 -- # set +x 00:15:19.278 [2024-11-20 12:42:52.269158] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:19.278 12:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.278 12:42:52 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:19.278 12:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.278 12:42:52 -- common/autotest_common.sh@10 -- # set +x 00:15:19.278 12:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.278 12:42:52 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:19.278 12:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.278 12:42:52 -- common/autotest_common.sh@10 -- # set +x 00:15:19.278 12:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.278 12:42:52 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 
--hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:20.663 12:42:53 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:20.663 12:42:53 -- common/autotest_common.sh@1187 -- # local i=0 00:15:20.663 12:42:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:20.663 12:42:53 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:20.663 12:42:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:23.214 12:42:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:23.214 12:42:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:23.214 12:42:55 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:23.214 12:42:55 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:23.214 12:42:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:23.214 12:42:55 -- common/autotest_common.sh@1197 -- # return 0 00:15:23.214 12:42:55 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:24.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.158 12:42:57 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:24.158 12:42:57 -- common/autotest_common.sh@1208 -- # local i=0 00:15:24.158 12:42:57 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:24.158 12:42:57 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.158 12:42:57 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:24.158 12:42:57 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.158 12:42:57 -- common/autotest_common.sh@1220 -- # return 0 00:15:24.158 12:42:57 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:24.158 12:42:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.158 12:42:57 -- common/autotest_common.sh@10 -- # set +x 00:15:24.158 12:42:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.158 12:42:57 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.158 12:42:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.158 12:42:57 -- common/autotest_common.sh@10 -- # set +x 00:15:24.158 12:42:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.158 12:42:57 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:24.158 12:42:57 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:24.158 12:42:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.158 12:42:57 -- common/autotest_common.sh@10 -- # set +x 00:15:24.159 12:42:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.159 12:42:57 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:24.159 12:42:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.159 12:42:57 -- common/autotest_common.sh@10 -- # set +x 00:15:24.159 [2024-11-20 12:42:57.123075] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:24.159 12:42:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.159 12:42:57 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:24.159 12:42:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.159 12:42:57 -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.159 12:42:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.159 12:42:57 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:24.159 12:42:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.159 12:42:57 -- common/autotest_common.sh@10 -- # set +x 00:15:24.159 12:42:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.159 12:42:57 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:25.551 12:42:58 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:25.551 12:42:58 -- common/autotest_common.sh@1187 -- # local i=0 00:15:25.551 12:42:58 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:25.551 12:42:58 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:25.551 12:42:58 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:28.125 12:43:00 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:28.125 12:43:00 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:28.125 12:43:00 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:28.125 12:43:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:28.125 12:43:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:28.125 12:43:00 -- common/autotest_common.sh@1197 -- # return 0 00:15:28.125 12:43:00 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:29.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.070 12:43:01 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:29.070 12:43:01 -- common/autotest_common.sh@1208 -- # local i=0 00:15:29.070 12:43:01 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:29.070 12:43:01 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.070 12:43:01 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:29.070 12:43:01 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.070 12:43:01 -- common/autotest_common.sh@1220 -- # return 0 00:15:29.070 12:43:01 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:29.070 12:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.070 12:43:01 -- common/autotest_common.sh@10 -- # set +x 00:15:29.070 12:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.070 12:43:01 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.070 12:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.070 12:43:01 -- common/autotest_common.sh@10 -- # set +x 00:15:29.070 12:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.070 12:43:02 -- target/rpc.sh@99 -- # seq 1 5 00:15:29.070 12:43:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:29.070 12:43:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:29.070 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.070 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.070 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.070 12:43:02 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:29.070 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.070 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.070 [2024-11-20 12:43:02.023430] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:29.070 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.070 12:43:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:29.070 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.070 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.070 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.070 12:43:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:29.070 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.071 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.071 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.071 12:43:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.071 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.071 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.071 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.071 12:43:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.071 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.071 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.071 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.071 12:43:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:29.071 12:43:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:29.071 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.071 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.071 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.071 12:43:02 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:29.071 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.071 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.071 [2024-11-20 12:43:02.079594] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:29.071 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.071 12:43:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:29.071 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.071 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.071 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.071 12:43:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:29.071 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.071 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.071 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.071 12:43:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.071 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.071 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.071 
12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.071 12:43:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.071 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.071 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.071 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.071 12:43:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:29.071 12:43:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:29.071 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.071 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.071 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.071 12:43:02 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:29.071 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.071 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.071 [2024-11-20 12:43:02.135795] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:29.071 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.071 12:43:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:29.071 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.071 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.071 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.071 12:43:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:29.071 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.071 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.071 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.071 12:43:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.071 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.071 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.071 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.071 12:43:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.071 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.071 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.071 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.333 12:43:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:29.333 12:43:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:29.333 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.333 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.333 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.333 12:43:02 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:29.333 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.333 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.333 [2024-11-20 12:43:02.195995] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:29.333 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.333 12:43:02 -- target/rpc.sh@102 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:29.333 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.333 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.333 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.333 12:43:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:29.333 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.333 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.333 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.333 12:43:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.333 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.333 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.333 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.333 12:43:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.333 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.333 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.333 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.333 12:43:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:29.333 12:43:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:29.333 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.333 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.333 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.333 12:43:02 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:29.333 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.333 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.333 [2024-11-20 12:43:02.252219] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:29.333 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.333 12:43:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:29.333 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.333 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.333 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.333 12:43:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:29.333 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.333 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.334 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.334 12:43:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.334 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.334 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.334 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.334 12:43:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.334 12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.334 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.334 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.334 12:43:02 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:29.334 
12:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.334 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.334 12:43:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.334 12:43:02 -- target/rpc.sh@110 -- # stats='{ 00:15:29.334 "tick_rate": 2400000000, 00:15:29.334 "poll_groups": [ 00:15:29.334 { 00:15:29.334 "name": "nvmf_tgt_poll_group_0", 00:15:29.334 "admin_qpairs": 2, 00:15:29.334 "io_qpairs": 27, 00:15:29.334 "current_admin_qpairs": 0, 00:15:29.334 "current_io_qpairs": 0, 00:15:29.334 "pending_bdev_io": 0, 00:15:29.334 "completed_nvme_io": 127, 00:15:29.334 "transports": [ 00:15:29.334 { 00:15:29.334 "trtype": "RDMA", 00:15:29.334 "pending_data_buffer": 0, 00:15:29.334 "devices": [ 00:15:29.334 { 00:15:29.334 "name": "mlx5_0", 00:15:29.334 "polls": 4819815, 00:15:29.334 "idle_polls": 4819491, 00:15:29.334 "completions": 363, 00:15:29.334 "requests": 181, 00:15:29.334 "request_latency": 29711280, 00:15:29.334 "pending_free_request": 0, 00:15:29.334 "pending_rdma_read": 0, 00:15:29.334 "pending_rdma_write": 0, 00:15:29.334 "pending_rdma_send": 0, 00:15:29.334 "total_send_wrs": 307, 00:15:29.334 "send_doorbell_updates": 158, 00:15:29.334 "total_recv_wrs": 4277, 00:15:29.334 "recv_doorbell_updates": 158 00:15:29.334 }, 00:15:29.334 { 00:15:29.334 "name": "mlx5_1", 00:15:29.334 "polls": 4819815, 00:15:29.334 "idle_polls": 4819815, 00:15:29.334 "completions": 0, 00:15:29.334 "requests": 0, 00:15:29.334 "request_latency": 0, 00:15:29.334 "pending_free_request": 0, 00:15:29.334 "pending_rdma_read": 0, 00:15:29.334 "pending_rdma_write": 0, 00:15:29.334 "pending_rdma_send": 0, 00:15:29.334 "total_send_wrs": 0, 00:15:29.334 "send_doorbell_updates": 0, 00:15:29.334 "total_recv_wrs": 4096, 00:15:29.334 "recv_doorbell_updates": 1 00:15:29.334 } 00:15:29.334 ] 00:15:29.334 } 00:15:29.334 ] 00:15:29.334 }, 00:15:29.334 { 00:15:29.334 "name": "nvmf_tgt_poll_group_1", 00:15:29.334 "admin_qpairs": 2, 00:15:29.334 "io_qpairs": 26, 00:15:29.334 "current_admin_qpairs": 0, 00:15:29.334 "current_io_qpairs": 0, 00:15:29.334 "pending_bdev_io": 0, 00:15:29.334 "completed_nvme_io": 125, 00:15:29.334 "transports": [ 00:15:29.334 { 00:15:29.334 "trtype": "RDMA", 00:15:29.334 "pending_data_buffer": 0, 00:15:29.334 "devices": [ 00:15:29.334 { 00:15:29.334 "name": "mlx5_0", 00:15:29.334 "polls": 4817756, 00:15:29.334 "idle_polls": 4817440, 00:15:29.334 "completions": 356, 00:15:29.334 "requests": 178, 00:15:29.334 "request_latency": 29314388, 00:15:29.334 "pending_free_request": 0, 00:15:29.334 "pending_rdma_read": 0, 00:15:29.334 "pending_rdma_write": 0, 00:15:29.334 "pending_rdma_send": 0, 00:15:29.334 "total_send_wrs": 302, 00:15:29.334 "send_doorbell_updates": 154, 00:15:29.334 "total_recv_wrs": 4274, 00:15:29.334 "recv_doorbell_updates": 155 00:15:29.334 }, 00:15:29.334 { 00:15:29.334 "name": "mlx5_1", 00:15:29.334 "polls": 4817756, 00:15:29.334 "idle_polls": 4817756, 00:15:29.334 "completions": 0, 00:15:29.334 "requests": 0, 00:15:29.334 "request_latency": 0, 00:15:29.334 "pending_free_request": 0, 00:15:29.334 "pending_rdma_read": 0, 00:15:29.334 "pending_rdma_write": 0, 00:15:29.334 "pending_rdma_send": 0, 00:15:29.334 "total_send_wrs": 0, 00:15:29.334 "send_doorbell_updates": 0, 00:15:29.334 "total_recv_wrs": 4096, 00:15:29.334 "recv_doorbell_updates": 1 00:15:29.334 } 00:15:29.334 ] 00:15:29.334 } 00:15:29.334 ] 00:15:29.334 }, 00:15:29.334 { 00:15:29.334 "name": "nvmf_tgt_poll_group_2", 00:15:29.334 "admin_qpairs": 1, 00:15:29.334 "io_qpairs": 26, 00:15:29.334 
"current_admin_qpairs": 0, 00:15:29.334 "current_io_qpairs": 0, 00:15:29.334 "pending_bdev_io": 0, 00:15:29.334 "completed_nvme_io": 77, 00:15:29.334 "transports": [ 00:15:29.334 { 00:15:29.334 "trtype": "RDMA", 00:15:29.334 "pending_data_buffer": 0, 00:15:29.334 "devices": [ 00:15:29.334 { 00:15:29.334 "name": "mlx5_0", 00:15:29.334 "polls": 4774426, 00:15:29.334 "idle_polls": 4774236, 00:15:29.334 "completions": 211, 00:15:29.334 "requests": 105, 00:15:29.334 "request_latency": 15963088, 00:15:29.334 "pending_free_request": 0, 00:15:29.334 "pending_rdma_read": 0, 00:15:29.334 "pending_rdma_write": 0, 00:15:29.334 "pending_rdma_send": 0, 00:15:29.334 "total_send_wrs": 170, 00:15:29.334 "send_doorbell_updates": 94, 00:15:29.334 "total_recv_wrs": 4201, 00:15:29.334 "recv_doorbell_updates": 94 00:15:29.334 }, 00:15:29.334 { 00:15:29.334 "name": "mlx5_1", 00:15:29.334 "polls": 4774426, 00:15:29.334 "idle_polls": 4774426, 00:15:29.334 "completions": 0, 00:15:29.334 "requests": 0, 00:15:29.334 "request_latency": 0, 00:15:29.334 "pending_free_request": 0, 00:15:29.334 "pending_rdma_read": 0, 00:15:29.334 "pending_rdma_write": 0, 00:15:29.334 "pending_rdma_send": 0, 00:15:29.334 "total_send_wrs": 0, 00:15:29.334 "send_doorbell_updates": 0, 00:15:29.334 "total_recv_wrs": 4096, 00:15:29.334 "recv_doorbell_updates": 1 00:15:29.334 } 00:15:29.334 ] 00:15:29.334 } 00:15:29.334 ] 00:15:29.334 }, 00:15:29.334 { 00:15:29.334 "name": "nvmf_tgt_poll_group_3", 00:15:29.334 "admin_qpairs": 2, 00:15:29.334 "io_qpairs": 26, 00:15:29.334 "current_admin_qpairs": 0, 00:15:29.334 "current_io_qpairs": 0, 00:15:29.334 "pending_bdev_io": 0, 00:15:29.334 "completed_nvme_io": 126, 00:15:29.334 "transports": [ 00:15:29.334 { 00:15:29.334 "trtype": "RDMA", 00:15:29.334 "pending_data_buffer": 0, 00:15:29.334 "devices": [ 00:15:29.334 { 00:15:29.334 "name": "mlx5_0", 00:15:29.334 "polls": 3336680, 00:15:29.334 "idle_polls": 3336361, 00:15:29.334 "completions": 360, 00:15:29.334 "requests": 180, 00:15:29.334 "request_latency": 39267930, 00:15:29.334 "pending_free_request": 0, 00:15:29.334 "pending_rdma_read": 0, 00:15:29.334 "pending_rdma_write": 0, 00:15:29.334 "pending_rdma_send": 0, 00:15:29.334 "total_send_wrs": 306, 00:15:29.334 "send_doorbell_updates": 155, 00:15:29.334 "total_recv_wrs": 4276, 00:15:29.334 "recv_doorbell_updates": 156 00:15:29.334 }, 00:15:29.334 { 00:15:29.334 "name": "mlx5_1", 00:15:29.334 "polls": 3336680, 00:15:29.334 "idle_polls": 3336680, 00:15:29.334 "completions": 0, 00:15:29.334 "requests": 0, 00:15:29.334 "request_latency": 0, 00:15:29.334 "pending_free_request": 0, 00:15:29.334 "pending_rdma_read": 0, 00:15:29.334 "pending_rdma_write": 0, 00:15:29.334 "pending_rdma_send": 0, 00:15:29.334 "total_send_wrs": 0, 00:15:29.334 "send_doorbell_updates": 0, 00:15:29.334 "total_recv_wrs": 4096, 00:15:29.334 "recv_doorbell_updates": 1 00:15:29.334 } 00:15:29.334 ] 00:15:29.334 } 00:15:29.334 ] 00:15:29.334 } 00:15:29.334 ] 00:15:29.334 }' 00:15:29.334 12:43:02 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:29.334 12:43:02 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:29.334 12:43:02 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:29.334 12:43:02 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:29.334 12:43:02 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:29.334 12:43:02 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:29.334 12:43:02 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:29.334 
12:43:02 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:29.334 12:43:02 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:29.334 12:43:02 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:15:29.334 12:43:02 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:15:29.334 12:43:02 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:15:29.334 12:43:02 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:15:29.334 12:43:02 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:15:29.334 12:43:02 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:29.596 12:43:02 -- target/rpc.sh@117 -- # (( 1290 > 0 )) 00:15:29.596 12:43:02 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:15:29.596 12:43:02 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:15:29.596 12:43:02 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:29.596 12:43:02 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:15:29.596 12:43:02 -- target/rpc.sh@118 -- # (( 114256686 > 0 )) 00:15:29.596 12:43:02 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:29.596 12:43:02 -- target/rpc.sh@123 -- # nvmftestfini 00:15:29.596 12:43:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:29.596 12:43:02 -- nvmf/common.sh@116 -- # sync 00:15:29.596 12:43:02 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:29.596 12:43:02 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:29.596 12:43:02 -- nvmf/common.sh@119 -- # set +e 00:15:29.596 12:43:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:29.596 12:43:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:29.596 rmmod nvme_rdma 00:15:29.596 rmmod nvme_fabrics 00:15:29.596 12:43:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:29.596 12:43:02 -- nvmf/common.sh@123 -- # set -e 00:15:29.596 12:43:02 -- nvmf/common.sh@124 -- # return 0 00:15:29.596 12:43:02 -- nvmf/common.sh@477 -- # '[' -n 448187 ']' 00:15:29.596 12:43:02 -- nvmf/common.sh@478 -- # killprocess 448187 00:15:29.596 12:43:02 -- common/autotest_common.sh@936 -- # '[' -z 448187 ']' 00:15:29.596 12:43:02 -- common/autotest_common.sh@940 -- # kill -0 448187 00:15:29.596 12:43:02 -- common/autotest_common.sh@941 -- # uname 00:15:29.596 12:43:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:29.596 12:43:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 448187 00:15:29.596 12:43:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:29.596 12:43:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:29.596 12:43:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 448187' 00:15:29.596 killing process with pid 448187 00:15:29.596 12:43:02 -- common/autotest_common.sh@955 -- # kill 448187 00:15:29.596 12:43:02 -- common/autotest_common.sh@960 -- # wait 448187 00:15:29.858 12:43:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:29.858 12:43:02 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:29.858 00:15:29.858 real 0m43.895s 00:15:29.858 user 2m26.820s 00:15:29.858 sys 0m7.229s 00:15:29.858 12:43:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:29.858 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.858 ************************************ 00:15:29.858 END TEST nvmf_rpc 00:15:29.858 ************************************ 00:15:29.858 12:43:02 -- 
nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:29.858 12:43:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:29.858 12:43:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:29.858 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.858 ************************************ 00:15:29.858 START TEST nvmf_invalid 00:15:29.858 ************************************ 00:15:29.858 12:43:02 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:30.120 * Looking for test storage... 00:15:30.120 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:30.120 12:43:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:30.120 12:43:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:30.120 12:43:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:30.120 12:43:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:30.120 12:43:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:30.120 12:43:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:30.120 12:43:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:30.120 12:43:03 -- scripts/common.sh@335 -- # IFS=.-: 00:15:30.120 12:43:03 -- scripts/common.sh@335 -- # read -ra ver1 00:15:30.120 12:43:03 -- scripts/common.sh@336 -- # IFS=.-: 00:15:30.120 12:43:03 -- scripts/common.sh@336 -- # read -ra ver2 00:15:30.120 12:43:03 -- scripts/common.sh@337 -- # local 'op=<' 00:15:30.120 12:43:03 -- scripts/common.sh@339 -- # ver1_l=2 00:15:30.120 12:43:03 -- scripts/common.sh@340 -- # ver2_l=1 00:15:30.120 12:43:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:30.120 12:43:03 -- scripts/common.sh@343 -- # case "$op" in 00:15:30.120 12:43:03 -- scripts/common.sh@344 -- # : 1 00:15:30.120 12:43:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:30.120 12:43:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:30.120 12:43:03 -- scripts/common.sh@364 -- # decimal 1 00:15:30.120 12:43:03 -- scripts/common.sh@352 -- # local d=1 00:15:30.120 12:43:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:30.120 12:43:03 -- scripts/common.sh@354 -- # echo 1 00:15:30.120 12:43:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:30.120 12:43:03 -- scripts/common.sh@365 -- # decimal 2 00:15:30.120 12:43:03 -- scripts/common.sh@352 -- # local d=2 00:15:30.120 12:43:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:30.120 12:43:03 -- scripts/common.sh@354 -- # echo 2 00:15:30.120 12:43:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:30.120 12:43:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:30.121 12:43:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:30.121 12:43:03 -- scripts/common.sh@367 -- # return 0 00:15:30.121 12:43:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:30.121 12:43:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:30.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.121 --rc genhtml_branch_coverage=1 00:15:30.121 --rc genhtml_function_coverage=1 00:15:30.121 --rc genhtml_legend=1 00:15:30.121 --rc geninfo_all_blocks=1 00:15:30.121 --rc geninfo_unexecuted_blocks=1 00:15:30.121 00:15:30.121 ' 00:15:30.121 12:43:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:30.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.121 --rc genhtml_branch_coverage=1 00:15:30.121 --rc genhtml_function_coverage=1 00:15:30.121 --rc genhtml_legend=1 00:15:30.121 --rc geninfo_all_blocks=1 00:15:30.121 --rc geninfo_unexecuted_blocks=1 00:15:30.121 00:15:30.121 ' 00:15:30.121 12:43:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:30.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.121 --rc genhtml_branch_coverage=1 00:15:30.121 --rc genhtml_function_coverage=1 00:15:30.121 --rc genhtml_legend=1 00:15:30.121 --rc geninfo_all_blocks=1 00:15:30.121 --rc geninfo_unexecuted_blocks=1 00:15:30.121 00:15:30.121 ' 00:15:30.121 12:43:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:30.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.121 --rc genhtml_branch_coverage=1 00:15:30.121 --rc genhtml_function_coverage=1 00:15:30.121 --rc genhtml_legend=1 00:15:30.121 --rc geninfo_all_blocks=1 00:15:30.121 --rc geninfo_unexecuted_blocks=1 00:15:30.121 00:15:30.121 ' 00:15:30.121 12:43:03 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.121 12:43:03 -- nvmf/common.sh@7 -- # uname -s 00:15:30.121 12:43:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.121 12:43:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.121 12:43:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.121 12:43:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.121 12:43:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.121 12:43:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.121 12:43:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.121 12:43:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.121 12:43:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.121 12:43:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.121 12:43:03 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:30.121 12:43:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:30.121 12:43:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.121 12:43:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.121 12:43:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:30.121 12:43:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:30.121 12:43:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.121 12:43:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.121 12:43:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.121 12:43:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.121 12:43:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.121 12:43:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.121 12:43:03 -- paths/export.sh@5 -- # export PATH 00:15:30.121 12:43:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.121 12:43:03 -- nvmf/common.sh@46 -- # : 0 00:15:30.121 12:43:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:30.121 12:43:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:30.121 12:43:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:30.121 12:43:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.121 12:43:03 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.121 12:43:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:30.121 12:43:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:30.121 12:43:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:30.121 12:43:03 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:30.121 12:43:03 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:30.121 12:43:03 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:30.121 12:43:03 -- target/invalid.sh@14 -- # target=foobar 00:15:30.121 12:43:03 -- target/invalid.sh@16 -- # RANDOM=0 00:15:30.121 12:43:03 -- target/invalid.sh@34 -- # nvmftestinit 00:15:30.121 12:43:03 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:30.121 12:43:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.121 12:43:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:30.121 12:43:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:30.121 12:43:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:30.121 12:43:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.121 12:43:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.121 12:43:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.121 12:43:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:30.121 12:43:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:30.121 12:43:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:30.121 12:43:03 -- common/autotest_common.sh@10 -- # set +x 00:15:38.289 12:43:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:38.289 12:43:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:38.289 12:43:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:38.289 12:43:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:38.289 12:43:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:38.289 12:43:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:38.289 12:43:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:38.289 12:43:10 -- nvmf/common.sh@294 -- # net_devs=() 00:15:38.289 12:43:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:38.289 12:43:10 -- nvmf/common.sh@295 -- # e810=() 00:15:38.289 12:43:10 -- nvmf/common.sh@295 -- # local -ga e810 00:15:38.289 12:43:10 -- nvmf/common.sh@296 -- # x722=() 00:15:38.289 12:43:10 -- nvmf/common.sh@296 -- # local -ga x722 00:15:38.289 12:43:10 -- nvmf/common.sh@297 -- # mlx=() 00:15:38.289 12:43:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:38.289 12:43:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.289 12:43:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.289 12:43:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.289 12:43:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.289 12:43:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.289 12:43:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.289 12:43:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.289 12:43:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.289 12:43:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.289 12:43:10 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.289 12:43:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.289 12:43:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:38.289 12:43:10 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:38.289 12:43:10 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:38.289 12:43:10 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:38.289 12:43:10 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:38.289 12:43:10 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:38.289 12:43:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:38.289 12:43:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:38.289 12:43:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:15:38.289 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:15:38.289 12:43:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:38.289 12:43:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:38.289 12:43:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:38.289 12:43:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:38.289 12:43:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:38.289 12:43:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:38.289 12:43:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:38.289 12:43:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:15:38.289 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:15:38.289 12:43:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:38.289 12:43:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:38.289 12:43:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:38.289 12:43:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:38.289 12:43:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:38.289 12:43:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:38.289 12:43:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:38.289 12:43:10 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:38.289 12:43:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:38.289 12:43:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.289 12:43:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:38.289 12:43:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.289 12:43:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:15:38.289 Found net devices under 0000:98:00.0: mlx_0_0 00:15:38.289 12:43:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.289 12:43:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:38.289 12:43:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.289 12:43:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:38.289 12:43:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.289 12:43:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:15:38.289 Found net devices under 0000:98:00.1: mlx_0_1 00:15:38.289 12:43:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.289 12:43:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:38.289 12:43:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:38.289 12:43:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:38.289 12:43:10 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:38.289 12:43:10 -- nvmf/common.sh@407 -- 
# [[ rdma == rdma ]] 00:15:38.289 12:43:10 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:38.289 12:43:10 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:38.289 12:43:10 -- nvmf/common.sh@57 -- # uname 00:15:38.289 12:43:10 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:38.289 12:43:10 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:38.289 12:43:10 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:38.289 12:43:10 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:38.289 12:43:10 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:38.289 12:43:10 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:38.289 12:43:10 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:38.289 12:43:10 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:38.289 12:43:10 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:38.289 12:43:10 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:38.289 12:43:10 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:38.289 12:43:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:38.289 12:43:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:38.289 12:43:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:38.289 12:43:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:38.289 12:43:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:38.289 12:43:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:38.289 12:43:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.289 12:43:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:38.289 12:43:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:38.289 12:43:10 -- nvmf/common.sh@104 -- # continue 2 00:15:38.289 12:43:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:38.289 12:43:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.290 12:43:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:38.290 12:43:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.290 12:43:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:38.290 12:43:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:38.290 12:43:10 -- nvmf/common.sh@104 -- # continue 2 00:15:38.290 12:43:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:38.290 12:43:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:38.290 12:43:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:38.290 12:43:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:38.290 12:43:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:38.290 12:43:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:38.290 12:43:10 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:38.290 12:43:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:38.290 12:43:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:38.290 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:38.290 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:15:38.290 altname enp152s0f0np0 00:15:38.290 altname ens817f0np0 00:15:38.290 inet 192.168.100.8/24 scope global mlx_0_0 00:15:38.290 valid_lft forever preferred_lft forever 00:15:38.290 12:43:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:38.290 12:43:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:38.290 12:43:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:38.290 12:43:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:38.290 12:43:10 -- 
nvmf/common.sh@112 -- # awk '{print $4}' 00:15:38.290 12:43:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:38.290 12:43:10 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:38.290 12:43:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:38.290 12:43:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:38.290 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:38.290 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:15:38.290 altname enp152s0f1np1 00:15:38.290 altname ens817f1np1 00:15:38.290 inet 192.168.100.9/24 scope global mlx_0_1 00:15:38.290 valid_lft forever preferred_lft forever 00:15:38.290 12:43:10 -- nvmf/common.sh@410 -- # return 0 00:15:38.290 12:43:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:38.290 12:43:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:38.290 12:43:10 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:38.290 12:43:10 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:38.290 12:43:10 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:38.290 12:43:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:38.290 12:43:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:38.290 12:43:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:38.290 12:43:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:38.290 12:43:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:38.290 12:43:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:38.290 12:43:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.290 12:43:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:38.290 12:43:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:38.290 12:43:10 -- nvmf/common.sh@104 -- # continue 2 00:15:38.290 12:43:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:38.290 12:43:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.290 12:43:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:38.290 12:43:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.290 12:43:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:38.290 12:43:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:38.290 12:43:10 -- nvmf/common.sh@104 -- # continue 2 00:15:38.290 12:43:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:38.290 12:43:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:38.290 12:43:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:38.290 12:43:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:38.290 12:43:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:38.290 12:43:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:38.290 12:43:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:38.290 12:43:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:38.290 12:43:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:38.290 12:43:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:38.290 12:43:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:38.290 12:43:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:38.290 12:43:10 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:38.290 192.168.100.9' 00:15:38.290 12:43:10 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:38.290 192.168.100.9' 00:15:38.290 12:43:10 -- nvmf/common.sh@445 -- # head -n 1 00:15:38.290 12:43:10 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:38.290 12:43:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:38.290 192.168.100.9' 00:15:38.290 12:43:10 -- nvmf/common.sh@446 -- # tail -n +2 00:15:38.290 12:43:10 -- nvmf/common.sh@446 -- # head -n 1 00:15:38.290 12:43:10 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:38.290 12:43:10 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:38.290 12:43:10 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:38.290 12:43:10 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:38.290 12:43:10 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:38.290 12:43:10 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:38.290 12:43:10 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:38.290 12:43:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:38.290 12:43:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:38.290 12:43:10 -- common/autotest_common.sh@10 -- # set +x 00:15:38.290 12:43:10 -- nvmf/common.sh@469 -- # nvmfpid=459228 00:15:38.290 12:43:10 -- nvmf/common.sh@470 -- # waitforlisten 459228 00:15:38.290 12:43:10 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:38.290 12:43:10 -- common/autotest_common.sh@829 -- # '[' -z 459228 ']' 00:15:38.290 12:43:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.290 12:43:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.290 12:43:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.290 12:43:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.290 12:43:10 -- common/autotest_common.sh@10 -- # set +x 00:15:38.290 [2024-11-20 12:43:10.452175] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:38.290 [2024-11-20 12:43:10.452240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.290 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.290 [2024-11-20 12:43:10.518897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.290 [2024-11-20 12:43:10.594636] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:38.290 [2024-11-20 12:43:10.594773] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.290 [2024-11-20 12:43:10.594783] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.290 [2024-11-20 12:43:10.594791] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:38.290 [2024-11-20 12:43:10.594933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.290 [2024-11-20 12:43:10.595039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.290 [2024-11-20 12:43:10.595372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.290 [2024-11-20 12:43:10.595373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.290 12:43:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.290 12:43:11 -- common/autotest_common.sh@862 -- # return 0 00:15:38.290 12:43:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:38.290 12:43:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:38.290 12:43:11 -- common/autotest_common.sh@10 -- # set +x 00:15:38.290 12:43:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.290 12:43:11 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:38.290 12:43:11 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29825 00:15:38.553 [2024-11-20 12:43:11.435622] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:38.553 12:43:11 -- target/invalid.sh@40 -- # out='request: 00:15:38.553 { 00:15:38.553 "nqn": "nqn.2016-06.io.spdk:cnode29825", 00:15:38.553 "tgt_name": "foobar", 00:15:38.553 "method": "nvmf_create_subsystem", 00:15:38.553 "req_id": 1 00:15:38.553 } 00:15:38.553 Got JSON-RPC error response 00:15:38.553 response: 00:15:38.553 { 00:15:38.553 "code": -32603, 00:15:38.553 "message": "Unable to find target foobar" 00:15:38.553 }' 00:15:38.553 12:43:11 -- target/invalid.sh@41 -- # [[ request: 00:15:38.553 { 00:15:38.553 "nqn": "nqn.2016-06.io.spdk:cnode29825", 00:15:38.553 "tgt_name": "foobar", 00:15:38.553 "method": "nvmf_create_subsystem", 00:15:38.553 "req_id": 1 00:15:38.553 } 00:15:38.553 Got JSON-RPC error response 00:15:38.553 response: 00:15:38.553 { 00:15:38.553 "code": -32603, 00:15:38.553 "message": "Unable to find target foobar" 00:15:38.553 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:38.553 12:43:11 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:38.553 12:43:11 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32428 00:15:38.553 [2024-11-20 12:43:11.612253] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32428: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:38.553 12:43:11 -- target/invalid.sh@45 -- # out='request: 00:15:38.553 { 00:15:38.553 "nqn": "nqn.2016-06.io.spdk:cnode32428", 00:15:38.553 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:38.553 "method": "nvmf_create_subsystem", 00:15:38.553 "req_id": 1 00:15:38.553 } 00:15:38.553 Got JSON-RPC error response 00:15:38.553 response: 00:15:38.553 { 00:15:38.553 "code": -32602, 00:15:38.553 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:38.553 }' 00:15:38.553 12:43:11 -- target/invalid.sh@46 -- # [[ request: 00:15:38.553 { 00:15:38.553 "nqn": "nqn.2016-06.io.spdk:cnode32428", 00:15:38.553 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:38.553 "method": "nvmf_create_subsystem", 00:15:38.553 "req_id": 1 00:15:38.553 } 00:15:38.553 Got JSON-RPC error response 00:15:38.553 response: 00:15:38.553 { 00:15:38.553 
"code": -32602, 00:15:38.553 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:38.553 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:38.553 12:43:11 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:38.553 12:43:11 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6468 00:15:38.815 [2024-11-20 12:43:11.788773] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6468: invalid model number 'SPDK_Controller' 00:15:38.815 12:43:11 -- target/invalid.sh@50 -- # out='request: 00:15:38.815 { 00:15:38.816 "nqn": "nqn.2016-06.io.spdk:cnode6468", 00:15:38.816 "model_number": "SPDK_Controller\u001f", 00:15:38.816 "method": "nvmf_create_subsystem", 00:15:38.816 "req_id": 1 00:15:38.816 } 00:15:38.816 Got JSON-RPC error response 00:15:38.816 response: 00:15:38.816 { 00:15:38.816 "code": -32602, 00:15:38.816 "message": "Invalid MN SPDK_Controller\u001f" 00:15:38.816 }' 00:15:38.816 12:43:11 -- target/invalid.sh@51 -- # [[ request: 00:15:38.816 { 00:15:38.816 "nqn": "nqn.2016-06.io.spdk:cnode6468", 00:15:38.816 "model_number": "SPDK_Controller\u001f", 00:15:38.816 "method": "nvmf_create_subsystem", 00:15:38.816 "req_id": 1 00:15:38.816 } 00:15:38.816 Got JSON-RPC error response 00:15:38.816 response: 00:15:38.816 { 00:15:38.816 "code": -32602, 00:15:38.816 "message": "Invalid MN SPDK_Controller\u001f" 00:15:38.816 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:38.816 12:43:11 -- target/invalid.sh@54 -- # gen_random_s 21 00:15:38.816 12:43:11 -- target/invalid.sh@19 -- # local length=21 ll 00:15:38.816 12:43:11 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:38.816 12:43:11 -- target/invalid.sh@21 -- # local chars 00:15:38.816 12:43:11 -- target/invalid.sh@22 -- # local string 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # printf %x 103 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # string+=g 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # printf %x 69 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # string+=E 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # printf %x 48 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # string+=0 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # printf %x 40 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # echo -e 
'\x28' 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # string+='(' 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # printf %x 88 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # string+=X 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # printf %x 60 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # string+='<' 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # printf %x 62 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # string+='>' 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # printf %x 78 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # string+=N 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # printf %x 46 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # string+=. 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # printf %x 64 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # string+=@ 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # printf %x 69 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # string+=E 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # printf %x 75 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # string+=K 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # printf %x 60 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:38.816 12:43:11 -- target/invalid.sh@25 -- # string+='<' 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.816 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # printf %x 127 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # string+=$'\177' 00:15:39.078 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.078 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # printf %x 119 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # echo 
-e '\x77' 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # string+=w 00:15:39.078 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.078 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # printf %x 77 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # string+=M 00:15:39.078 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.078 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # printf %x 101 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # string+=e 00:15:39.078 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.078 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # printf %x 79 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # string+=O 00:15:39.078 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.078 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # printf %x 110 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # string+=n 00:15:39.078 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.078 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # printf %x 52 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # string+=4 00:15:39.078 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.078 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # printf %x 59 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:39.078 12:43:11 -- target/invalid.sh@25 -- # string+=';' 00:15:39.078 12:43:11 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.078 12:43:11 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.078 12:43:11 -- target/invalid.sh@28 -- # [[ g == \- ]] 00:15:39.078 12:43:11 -- target/invalid.sh@31 -- # echo 'gE0(X<>N.@EK<wMeOn4;' 00:15:39.078 12:43:11 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'gE0(X<>N.@EK<wMeOn4;' nqn.2016-06.io.spdk:cnode8317 00:15:39.078 [2024-11-20 12:43:12.125862] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8317: invalid serial number 'gE0(X<>N.@EK<wMeOn4;' 00:15:39.078 12:43:12 -- target/invalid.sh@54 -- # out='request: 00:15:39.078 { 00:15:39.078 "nqn": "nqn.2016-06.io.spdk:cnode8317", 00:15:39.078 "serial_number": "gE0(X<>N.@EK<\u007fwMeOn4;", 00:15:39.078 "method": "nvmf_create_subsystem", 00:15:39.078 "req_id": 1 00:15:39.078 } 00:15:39.078 Got JSON-RPC error response 00:15:39.078 response: 00:15:39.078 { 00:15:39.078 "code": -32602, 00:15:39.078 "message": "Invalid SN gE0(X<>N.@EK<\u007fwMeOn4;" 00:15:39.078 }' 00:15:39.078 12:43:12 -- target/invalid.sh@55 -- # [[ request: 00:15:39.078 { 00:15:39.078 "nqn": "nqn.2016-06.io.spdk:cnode8317", 00:15:39.078 "serial_number": "gE0(X<>N.@EK<\u007fwMeOn4;", 00:15:39.078 "method": "nvmf_create_subsystem", 00:15:39.078 "req_id": 1 00:15:39.078 } 00:15:39.078 Got JSON-RPC error response 00:15:39.078 response: 00:15:39.078 { 00:15:39.078 "code": -32602, 00:15:39.078 "message": "Invalid SN 
gE0(X<>N.@EK<\u007fwMeOn4;" 00:15:39.078 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:39.078 12:43:12 -- target/invalid.sh@58 -- # gen_random_s 41 00:15:39.078 12:43:12 -- target/invalid.sh@19 -- # local length=41 ll 00:15:39.078 12:43:12 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:39.078 12:43:12 -- target/invalid.sh@21 -- # local chars 00:15:39.078 12:43:12 -- target/invalid.sh@22 -- # local string 00:15:39.078 12:43:12 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:39.078 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.078 12:43:12 -- target/invalid.sh@25 -- # printf %x 66 00:15:39.078 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x42' 00:15:39.078 12:43:12 -- target/invalid.sh@25 -- # string+=B 00:15:39.078 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.078 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.078 12:43:12 -- target/invalid.sh@25 -- # printf %x 106 00:15:39.078 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:15:39.078 12:43:12 -- target/invalid.sh@25 -- # string+=j 00:15:39.078 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.078 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.078 12:43:12 -- target/invalid.sh@25 -- # printf %x 50 00:15:39.078 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # string+=2 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # printf %x 127 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # string+=$'\177' 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # printf %x 88 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # string+=X 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # printf %x 47 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # string+=/ 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # printf %x 59 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # string+=';' 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # printf %x 105 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # string+=i 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 
00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # printf %x 109 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # string+=m 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # printf %x 74 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # string+=J 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # printf %x 47 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # string+=/ 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # printf %x 59 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # string+=';' 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # printf %x 84 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # string+=T 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # printf %x 113 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # string+=q 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # printf %x 40 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # string+='(' 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # printf %x 101 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # string+=e 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # printf %x 59 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # string+=';' 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # printf %x 42 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # string+='*' 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # printf %x 53 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # string+=5 00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 
00:15:39.341 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.341 12:43:12 -- target/invalid.sh@25 -- # printf %x 47 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+=/ 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 36 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+='$' 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 52 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+=4 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 53 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+=5 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 110 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+=n 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 45 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+=- 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 63 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+='?' 
00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 123 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+='{' 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 84 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+=T 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 80 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+=P 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 97 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+=a 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 44 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+=, 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 96 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+='`' 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 43 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+=+ 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 79 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+=O 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 57 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+=9 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 100 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x64' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+=d 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 92 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+='\' 
00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 45 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # string+=- 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.342 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.342 12:43:12 -- target/invalid.sh@25 -- # printf %x 114 00:15:39.604 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:39.604 12:43:12 -- target/invalid.sh@25 -- # string+=r 00:15:39.604 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.604 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.604 12:43:12 -- target/invalid.sh@25 -- # printf %x 35 00:15:39.604 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:39.604 12:43:12 -- target/invalid.sh@25 -- # string+='#' 00:15:39.604 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.604 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.604 12:43:12 -- target/invalid.sh@25 -- # printf %x 117 00:15:39.604 12:43:12 -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:39.604 12:43:12 -- target/invalid.sh@25 -- # string+=u 00:15:39.604 12:43:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:39.604 12:43:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:39.604 12:43:12 -- target/invalid.sh@28 -- # [[ B == \- ]] 00:15:39.604 12:43:12 -- target/invalid.sh@31 -- # echo 'Bj2X/;imJ/;Tq(e;*5/$45n-?{TPa,`+O9d\-r#u' 00:15:39.604 12:43:12 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Bj2X/;imJ/;Tq(e;*5/$45n-?{TPa,`+O9d\-r#u' nqn.2016-06.io.spdk:cnode7900 00:15:39.604 [2024-11-20 12:43:12.611457] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7900: invalid model number 'Bj2X/;imJ/;Tq(e;*5/$45n-?{TPa,`+O9d\-r#u' 00:15:39.604 12:43:12 -- target/invalid.sh@58 -- # out='request: 00:15:39.604 { 00:15:39.604 "nqn": "nqn.2016-06.io.spdk:cnode7900", 00:15:39.604 "model_number": "Bj2\u007fX/;imJ/;Tq(e;*5/$45n-?{TPa,`+O9d\\-r#u", 00:15:39.604 "method": "nvmf_create_subsystem", 00:15:39.604 "req_id": 1 00:15:39.604 } 00:15:39.604 Got JSON-RPC error response 00:15:39.604 response: 00:15:39.604 { 00:15:39.604 "code": -32602, 00:15:39.604 "message": "Invalid MN Bj2\u007fX/;imJ/;Tq(e;*5/$45n-?{TPa,`+O9d\\-r#u" 00:15:39.604 }' 00:15:39.604 12:43:12 -- target/invalid.sh@59 -- # [[ request: 00:15:39.604 { 00:15:39.604 "nqn": "nqn.2016-06.io.spdk:cnode7900", 00:15:39.604 "model_number": "Bj2\u007fX/;imJ/;Tq(e;*5/$45n-?{TPa,`+O9d\\-r#u", 00:15:39.604 "method": "nvmf_create_subsystem", 00:15:39.604 "req_id": 1 00:15:39.604 } 00:15:39.604 Got JSON-RPC error response 00:15:39.604 response: 00:15:39.604 { 00:15:39.604 "code": -32602, 00:15:39.604 "message": "Invalid MN Bj2\u007fX/;imJ/;Tq(e;*5/$45n-?{TPa,`+O9d\\-r#u" 00:15:39.604 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:39.604 12:43:12 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:15:39.865 [2024-11-20 12:43:12.818602] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20cd080/0x20d1570) succeed. 00:15:39.865 [2024-11-20 12:43:12.833337] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20ce670/0x2112c10) succeed. 
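The gen_random_s helper traced above builds a 41-character string one character at a time from the printable ASCII range (codes 32 through 127), and target/invalid.sh then expects nvmf_create_subsystem to reject it as a model number with an "Invalid MN" JSON-RPC error, since the string is one byte longer than the 40-byte NVMe model number field. A condensed standalone sketch of the same negative check follows; the rpc.py path, the -d flag, and the expected error text come from the trace above, while the simplified generator and the $rpc shorthand are illustrative only, not part of the test suite:

    # Sketch only: assumes a running nvmf_tgt listening on the default /var/tmp/spdk.sock.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    gen_random_s() {
        local length=$1 string= c i
        for (( i = 0; i < length; i++ )); do
            printf -v c '\\x%02x' $(( RANDOM % 96 + 32 ))   # printable ASCII 32..127
            string+=$(echo -e "$c")
        done
        # (the real helper additionally guards against strings starting with '-',
        #  cf. the [[ B == \- ]] check in the trace above; omitted here)
        echo "$string"
    }

    mn=$(gen_random_s 41)   # 41 characters: one over the model number limit
    out=$($rpc nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode7900 2>&1) || true
    [[ $out == *"Invalid MN"* ]] && echo 'rejected as expected'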
00:15:40.126 12:43:12 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:40.126 12:43:13 -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:15:40.126 12:43:13 -- target/invalid.sh@67 -- # echo '192.168.100.8 00:15:40.126 192.168.100.9' 00:15:40.126 12:43:13 -- target/invalid.sh@67 -- # head -n 1 00:15:40.126 12:43:13 -- target/invalid.sh@67 -- # IP=192.168.100.8 00:15:40.126 12:43:13 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:15:40.387 [2024-11-20 12:43:13.305366] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:40.387 12:43:13 -- target/invalid.sh@69 -- # out='request: 00:15:40.387 { 00:15:40.387 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:40.387 "listen_address": { 00:15:40.387 "trtype": "rdma", 00:15:40.387 "traddr": "192.168.100.8", 00:15:40.387 "trsvcid": "4421" 00:15:40.387 }, 00:15:40.387 "method": "nvmf_subsystem_remove_listener", 00:15:40.387 "req_id": 1 00:15:40.387 } 00:15:40.387 Got JSON-RPC error response 00:15:40.387 response: 00:15:40.387 { 00:15:40.387 "code": -32602, 00:15:40.387 "message": "Invalid parameters" 00:15:40.387 }' 00:15:40.387 12:43:13 -- target/invalid.sh@70 -- # [[ request: 00:15:40.387 { 00:15:40.387 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:40.387 "listen_address": { 00:15:40.387 "trtype": "rdma", 00:15:40.387 "traddr": "192.168.100.8", 00:15:40.387 "trsvcid": "4421" 00:15:40.387 }, 00:15:40.387 "method": "nvmf_subsystem_remove_listener", 00:15:40.387 "req_id": 1 00:15:40.387 } 00:15:40.387 Got JSON-RPC error response 00:15:40.387 response: 00:15:40.387 { 00:15:40.387 "code": -32602, 00:15:40.387 "message": "Invalid parameters" 00:15:40.387 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:40.387 12:43:13 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22092 -i 0 00:15:40.387 [2024-11-20 12:43:13.485928] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22092: invalid cntlid range [0-65519] 00:15:40.650 12:43:13 -- target/invalid.sh@73 -- # out='request: 00:15:40.650 { 00:15:40.650 "nqn": "nqn.2016-06.io.spdk:cnode22092", 00:15:40.650 "min_cntlid": 0, 00:15:40.650 "method": "nvmf_create_subsystem", 00:15:40.650 "req_id": 1 00:15:40.650 } 00:15:40.650 Got JSON-RPC error response 00:15:40.650 response: 00:15:40.650 { 00:15:40.650 "code": -32602, 00:15:40.650 "message": "Invalid cntlid range [0-65519]" 00:15:40.650 }' 00:15:40.650 12:43:13 -- target/invalid.sh@74 -- # [[ request: 00:15:40.650 { 00:15:40.650 "nqn": "nqn.2016-06.io.spdk:cnode22092", 00:15:40.650 "min_cntlid": 0, 00:15:40.650 "method": "nvmf_create_subsystem", 00:15:40.650 "req_id": 1 00:15:40.650 } 00:15:40.650 Got JSON-RPC error response 00:15:40.650 response: 00:15:40.650 { 00:15:40.650 "code": -32602, 00:15:40.650 "message": "Invalid cntlid range [0-65519]" 00:15:40.650 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:40.650 12:43:13 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31010 -i 65520 00:15:40.650 [2024-11-20 12:43:13.662586] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31010: invalid cntlid range [65520-65519] 00:15:40.650 
12:43:13 -- target/invalid.sh@75 -- # out='request: 00:15:40.650 { 00:15:40.650 "nqn": "nqn.2016-06.io.spdk:cnode31010", 00:15:40.650 "min_cntlid": 65520, 00:15:40.650 "method": "nvmf_create_subsystem", 00:15:40.650 "req_id": 1 00:15:40.650 } 00:15:40.650 Got JSON-RPC error response 00:15:40.650 response: 00:15:40.650 { 00:15:40.650 "code": -32602, 00:15:40.650 "message": "Invalid cntlid range [65520-65519]" 00:15:40.650 }' 00:15:40.650 12:43:13 -- target/invalid.sh@76 -- # [[ request: 00:15:40.650 { 00:15:40.650 "nqn": "nqn.2016-06.io.spdk:cnode31010", 00:15:40.650 "min_cntlid": 65520, 00:15:40.650 "method": "nvmf_create_subsystem", 00:15:40.650 "req_id": 1 00:15:40.650 } 00:15:40.650 Got JSON-RPC error response 00:15:40.650 response: 00:15:40.650 { 00:15:40.650 "code": -32602, 00:15:40.650 "message": "Invalid cntlid range [65520-65519]" 00:15:40.650 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:40.650 12:43:13 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8806 -I 0 00:15:40.911 [2024-11-20 12:43:13.835198] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8806: invalid cntlid range [1-0] 00:15:40.911 12:43:13 -- target/invalid.sh@77 -- # out='request: 00:15:40.911 { 00:15:40.911 "nqn": "nqn.2016-06.io.spdk:cnode8806", 00:15:40.911 "max_cntlid": 0, 00:15:40.911 "method": "nvmf_create_subsystem", 00:15:40.911 "req_id": 1 00:15:40.911 } 00:15:40.911 Got JSON-RPC error response 00:15:40.911 response: 00:15:40.911 { 00:15:40.911 "code": -32602, 00:15:40.911 "message": "Invalid cntlid range [1-0]" 00:15:40.911 }' 00:15:40.911 12:43:13 -- target/invalid.sh@78 -- # [[ request: 00:15:40.911 { 00:15:40.911 "nqn": "nqn.2016-06.io.spdk:cnode8806", 00:15:40.911 "max_cntlid": 0, 00:15:40.911 "method": "nvmf_create_subsystem", 00:15:40.912 "req_id": 1 00:15:40.912 } 00:15:40.912 Got JSON-RPC error response 00:15:40.912 response: 00:15:40.912 { 00:15:40.912 "code": -32602, 00:15:40.912 "message": "Invalid cntlid range [1-0]" 00:15:40.912 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:40.912 12:43:13 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27976 -I 65520 00:15:40.912 [2024-11-20 12:43:14.011848] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27976: invalid cntlid range [1-65520] 00:15:41.173 12:43:14 -- target/invalid.sh@79 -- # out='request: 00:15:41.173 { 00:15:41.173 "nqn": "nqn.2016-06.io.spdk:cnode27976", 00:15:41.173 "max_cntlid": 65520, 00:15:41.173 "method": "nvmf_create_subsystem", 00:15:41.173 "req_id": 1 00:15:41.173 } 00:15:41.173 Got JSON-RPC error response 00:15:41.173 response: 00:15:41.173 { 00:15:41.173 "code": -32602, 00:15:41.173 "message": "Invalid cntlid range [1-65520]" 00:15:41.173 }' 00:15:41.173 12:43:14 -- target/invalid.sh@80 -- # [[ request: 00:15:41.173 { 00:15:41.173 "nqn": "nqn.2016-06.io.spdk:cnode27976", 00:15:41.173 "max_cntlid": 65520, 00:15:41.173 "method": "nvmf_create_subsystem", 00:15:41.173 "req_id": 1 00:15:41.173 } 00:15:41.173 Got JSON-RPC error response 00:15:41.173 response: 00:15:41.173 { 00:15:41.173 "code": -32602, 00:15:41.173 "message": "Invalid cntlid range [1-65520]" 00:15:41.173 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:41.173 12:43:14 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode8297 -i 6 -I 5 00:15:41.173 [2024-11-20 12:43:14.192479] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8297: invalid cntlid range [6-5] 00:15:41.173 12:43:14 -- target/invalid.sh@83 -- # out='request: 00:15:41.173 { 00:15:41.173 "nqn": "nqn.2016-06.io.spdk:cnode8297", 00:15:41.173 "min_cntlid": 6, 00:15:41.173 "max_cntlid": 5, 00:15:41.173 "method": "nvmf_create_subsystem", 00:15:41.173 "req_id": 1 00:15:41.173 } 00:15:41.173 Got JSON-RPC error response 00:15:41.173 response: 00:15:41.173 { 00:15:41.173 "code": -32602, 00:15:41.173 "message": "Invalid cntlid range [6-5]" 00:15:41.173 }' 00:15:41.173 12:43:14 -- target/invalid.sh@84 -- # [[ request: 00:15:41.173 { 00:15:41.173 "nqn": "nqn.2016-06.io.spdk:cnode8297", 00:15:41.173 "min_cntlid": 6, 00:15:41.173 "max_cntlid": 5, 00:15:41.173 "method": "nvmf_create_subsystem", 00:15:41.173 "req_id": 1 00:15:41.173 } 00:15:41.173 Got JSON-RPC error response 00:15:41.173 response: 00:15:41.173 { 00:15:41.173 "code": -32602, 00:15:41.173 "message": "Invalid cntlid range [6-5]" 00:15:41.173 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:41.173 12:43:14 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:41.435 12:43:14 -- target/invalid.sh@87 -- # out='request: 00:15:41.435 { 00:15:41.435 "name": "foobar", 00:15:41.435 "method": "nvmf_delete_target", 00:15:41.435 "req_id": 1 00:15:41.435 } 00:15:41.435 Got JSON-RPC error response 00:15:41.435 response: 00:15:41.435 { 00:15:41.435 "code": -32602, 00:15:41.435 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:41.435 }' 00:15:41.435 12:43:14 -- target/invalid.sh@88 -- # [[ request: 00:15:41.435 { 00:15:41.435 "name": "foobar", 00:15:41.435 "method": "nvmf_delete_target", 00:15:41.435 "req_id": 1 00:15:41.435 } 00:15:41.435 Got JSON-RPC error response 00:15:41.435 response: 00:15:41.435 { 00:15:41.435 "code": -32602, 00:15:41.435 "message": "The specified target doesn't exist, cannot delete it." 
00:15:41.435 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:41.435 12:43:14 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:41.435 12:43:14 -- target/invalid.sh@91 -- # nvmftestfini 00:15:41.435 12:43:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:41.435 12:43:14 -- nvmf/common.sh@116 -- # sync 00:15:41.435 12:43:14 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:41.435 12:43:14 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:41.435 12:43:14 -- nvmf/common.sh@119 -- # set +e 00:15:41.435 12:43:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:41.435 12:43:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:41.435 rmmod nvme_rdma 00:15:41.435 rmmod nvme_fabrics 00:15:41.435 12:43:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:41.435 12:43:14 -- nvmf/common.sh@123 -- # set -e 00:15:41.435 12:43:14 -- nvmf/common.sh@124 -- # return 0 00:15:41.435 12:43:14 -- nvmf/common.sh@477 -- # '[' -n 459228 ']' 00:15:41.435 12:43:14 -- nvmf/common.sh@478 -- # killprocess 459228 00:15:41.435 12:43:14 -- common/autotest_common.sh@936 -- # '[' -z 459228 ']' 00:15:41.435 12:43:14 -- common/autotest_common.sh@940 -- # kill -0 459228 00:15:41.435 12:43:14 -- common/autotest_common.sh@941 -- # uname 00:15:41.435 12:43:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:41.435 12:43:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 459228 00:15:41.435 12:43:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:41.435 12:43:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:41.435 12:43:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 459228' 00:15:41.435 killing process with pid 459228 00:15:41.435 12:43:14 -- common/autotest_common.sh@955 -- # kill 459228 00:15:41.435 12:43:14 -- common/autotest_common.sh@960 -- # wait 459228 00:15:41.697 12:43:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:41.697 12:43:14 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:41.697 00:15:41.697 real 0m11.752s 00:15:41.697 user 0m20.478s 00:15:41.697 sys 0m6.464s 00:15:41.697 12:43:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:41.697 12:43:14 -- common/autotest_common.sh@10 -- # set +x 00:15:41.697 ************************************ 00:15:41.697 END TEST nvmf_invalid 00:15:41.697 ************************************ 00:15:41.697 12:43:14 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:41.697 12:43:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:41.697 12:43:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:41.697 12:43:14 -- common/autotest_common.sh@10 -- # set +x 00:15:41.697 ************************************ 00:15:41.697 START TEST nvmf_abort 00:15:41.697 ************************************ 00:15:41.697 12:43:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:41.697 * Looking for test storage... 
00:15:41.697 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:41.697 12:43:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:41.697 12:43:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:41.697 12:43:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:41.959 12:43:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:41.959 12:43:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:41.959 12:43:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:41.959 12:43:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:41.959 12:43:14 -- scripts/common.sh@335 -- # IFS=.-: 00:15:41.959 12:43:14 -- scripts/common.sh@335 -- # read -ra ver1 00:15:41.959 12:43:14 -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.959 12:43:14 -- scripts/common.sh@336 -- # read -ra ver2 00:15:41.959 12:43:14 -- scripts/common.sh@337 -- # local 'op=<' 00:15:41.959 12:43:14 -- scripts/common.sh@339 -- # ver1_l=2 00:15:41.959 12:43:14 -- scripts/common.sh@340 -- # ver2_l=1 00:15:41.959 12:43:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:41.959 12:43:14 -- scripts/common.sh@343 -- # case "$op" in 00:15:41.959 12:43:14 -- scripts/common.sh@344 -- # : 1 00:15:41.959 12:43:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:41.959 12:43:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:41.959 12:43:14 -- scripts/common.sh@364 -- # decimal 1 00:15:41.960 12:43:14 -- scripts/common.sh@352 -- # local d=1 00:15:41.960 12:43:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.960 12:43:14 -- scripts/common.sh@354 -- # echo 1 00:15:41.960 12:43:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:41.960 12:43:14 -- scripts/common.sh@365 -- # decimal 2 00:15:41.960 12:43:14 -- scripts/common.sh@352 -- # local d=2 00:15:41.960 12:43:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.960 12:43:14 -- scripts/common.sh@354 -- # echo 2 00:15:41.960 12:43:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:41.960 12:43:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:41.960 12:43:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:41.960 12:43:14 -- scripts/common.sh@367 -- # return 0 00:15:41.960 12:43:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.960 12:43:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:41.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.960 --rc genhtml_branch_coverage=1 00:15:41.960 --rc genhtml_function_coverage=1 00:15:41.960 --rc genhtml_legend=1 00:15:41.960 --rc geninfo_all_blocks=1 00:15:41.960 --rc geninfo_unexecuted_blocks=1 00:15:41.960 00:15:41.960 ' 00:15:41.960 12:43:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:41.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.960 --rc genhtml_branch_coverage=1 00:15:41.960 --rc genhtml_function_coverage=1 00:15:41.960 --rc genhtml_legend=1 00:15:41.960 --rc geninfo_all_blocks=1 00:15:41.960 --rc geninfo_unexecuted_blocks=1 00:15:41.960 00:15:41.960 ' 00:15:41.960 12:43:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:41.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.960 --rc genhtml_branch_coverage=1 00:15:41.960 --rc genhtml_function_coverage=1 00:15:41.960 --rc genhtml_legend=1 00:15:41.960 --rc geninfo_all_blocks=1 00:15:41.960 --rc geninfo_unexecuted_blocks=1 00:15:41.960 00:15:41.960 ' 
00:15:41.960 12:43:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:41.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.960 --rc genhtml_branch_coverage=1 00:15:41.960 --rc genhtml_function_coverage=1 00:15:41.960 --rc genhtml_legend=1 00:15:41.960 --rc geninfo_all_blocks=1 00:15:41.960 --rc geninfo_unexecuted_blocks=1 00:15:41.960 00:15:41.960 ' 00:15:41.960 12:43:14 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.960 12:43:14 -- nvmf/common.sh@7 -- # uname -s 00:15:41.960 12:43:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.960 12:43:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.960 12:43:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.960 12:43:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.960 12:43:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.960 12:43:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.960 12:43:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.960 12:43:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.960 12:43:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.960 12:43:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.960 12:43:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:41.960 12:43:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:41.960 12:43:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.960 12:43:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.960 12:43:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:41.960 12:43:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:41.960 12:43:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.960 12:43:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.960 12:43:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.960 12:43:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.960 12:43:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.960 12:43:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.960 12:43:14 -- paths/export.sh@5 -- # export PATH 00:15:41.960 12:43:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.960 12:43:14 -- nvmf/common.sh@46 -- # : 0 00:15:41.960 12:43:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:41.960 12:43:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:41.960 12:43:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:41.960 12:43:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.960 12:43:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.960 12:43:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:41.960 12:43:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:41.960 12:43:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:41.960 12:43:14 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:41.960 12:43:14 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:15:41.960 12:43:14 -- target/abort.sh@14 -- # nvmftestinit 00:15:41.960 12:43:14 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:41.960 12:43:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.960 12:43:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:41.960 12:43:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:41.960 12:43:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:41.960 12:43:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.960 12:43:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.960 12:43:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.960 12:43:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:41.960 12:43:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:41.960 12:43:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:41.960 12:43:14 -- common/autotest_common.sh@10 -- # set +x 00:15:48.556 12:43:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:48.557 12:43:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:48.557 12:43:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:48.557 12:43:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:48.557 12:43:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:48.557 12:43:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:48.557 12:43:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:48.557 12:43:21 -- nvmf/common.sh@294 -- # net_devs=() 00:15:48.557 12:43:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:48.557 12:43:21 -- nvmf/common.sh@295 -- 
# e810=() 00:15:48.557 12:43:21 -- nvmf/common.sh@295 -- # local -ga e810 00:15:48.557 12:43:21 -- nvmf/common.sh@296 -- # x722=() 00:15:48.557 12:43:21 -- nvmf/common.sh@296 -- # local -ga x722 00:15:48.557 12:43:21 -- nvmf/common.sh@297 -- # mlx=() 00:15:48.557 12:43:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:48.557 12:43:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:48.557 12:43:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:48.557 12:43:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:48.557 12:43:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:48.557 12:43:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:48.557 12:43:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:48.557 12:43:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:48.557 12:43:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:48.557 12:43:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:48.557 12:43:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:48.557 12:43:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:48.557 12:43:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:48.557 12:43:21 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:48.557 12:43:21 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:48.557 12:43:21 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:48.557 12:43:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:48.557 12:43:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:48.557 12:43:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:15:48.557 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:15:48.557 12:43:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:48.557 12:43:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:48.557 12:43:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:15:48.557 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:15:48.557 12:43:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:48.557 12:43:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:48.557 12:43:21 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:48.557 12:43:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.557 12:43:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:15:48.557 12:43:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.557 12:43:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:15:48.557 Found net devices under 0000:98:00.0: mlx_0_0 00:15:48.557 12:43:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.557 12:43:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:48.557 12:43:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.557 12:43:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:48.557 12:43:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.557 12:43:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:15:48.557 Found net devices under 0000:98:00.1: mlx_0_1 00:15:48.557 12:43:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.557 12:43:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:48.557 12:43:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:48.557 12:43:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:48.557 12:43:21 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:48.557 12:43:21 -- nvmf/common.sh@57 -- # uname 00:15:48.557 12:43:21 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:48.557 12:43:21 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:48.557 12:43:21 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:48.557 12:43:21 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:48.557 12:43:21 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:48.557 12:43:21 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:48.557 12:43:21 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:48.557 12:43:21 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:48.557 12:43:21 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:48.557 12:43:21 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:48.557 12:43:21 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:48.557 12:43:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:48.557 12:43:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:48.557 12:43:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:48.557 12:43:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:48.557 12:43:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:48.557 12:43:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:48.557 12:43:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:48.557 12:43:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:48.557 12:43:21 -- nvmf/common.sh@104 -- # continue 2 00:15:48.557 12:43:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:48.557 12:43:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:48.557 12:43:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:48.557 12:43:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:48.557 12:43:21 -- nvmf/common.sh@104 -- # continue 2 00:15:48.557 12:43:21 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:15:48.557 12:43:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:48.557 12:43:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:48.557 12:43:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:48.557 12:43:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:48.557 12:43:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:48.557 12:43:21 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:48.557 12:43:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:48.557 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:48.557 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:15:48.557 altname enp152s0f0np0 00:15:48.557 altname ens817f0np0 00:15:48.557 inet 192.168.100.8/24 scope global mlx_0_0 00:15:48.557 valid_lft forever preferred_lft forever 00:15:48.557 12:43:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:48.557 12:43:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:48.557 12:43:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:48.557 12:43:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:48.557 12:43:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:48.557 12:43:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:48.557 12:43:21 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:48.557 12:43:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:48.557 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:48.557 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:15:48.557 altname enp152s0f1np1 00:15:48.557 altname ens817f1np1 00:15:48.557 inet 192.168.100.9/24 scope global mlx_0_1 00:15:48.557 valid_lft forever preferred_lft forever 00:15:48.557 12:43:21 -- nvmf/common.sh@410 -- # return 0 00:15:48.557 12:43:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:48.557 12:43:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:48.557 12:43:21 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:48.557 12:43:21 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:48.557 12:43:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:48.557 12:43:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:48.557 12:43:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:48.557 12:43:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:48.557 12:43:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:48.557 12:43:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:48.557 12:43:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:48.557 12:43:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:48.557 12:43:21 -- nvmf/common.sh@104 -- # continue 2 00:15:48.557 12:43:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:48.557 12:43:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:48.557 12:43:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:48.557 12:43:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:48.557 12:43:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:48.557 12:43:21 -- 
nvmf/common.sh@104 -- # continue 2 00:15:48.557 12:43:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:48.558 12:43:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:48.558 12:43:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:48.558 12:43:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:48.558 12:43:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:48.558 12:43:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:48.558 12:43:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:48.558 12:43:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:48.558 12:43:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:48.558 12:43:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:48.558 12:43:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:48.558 12:43:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:48.558 12:43:21 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:48.558 192.168.100.9' 00:15:48.558 12:43:21 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:48.558 192.168.100.9' 00:15:48.558 12:43:21 -- nvmf/common.sh@445 -- # head -n 1 00:15:48.558 12:43:21 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:48.558 12:43:21 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:48.558 192.168.100.9' 00:15:48.558 12:43:21 -- nvmf/common.sh@446 -- # tail -n +2 00:15:48.558 12:43:21 -- nvmf/common.sh@446 -- # head -n 1 00:15:48.558 12:43:21 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:48.558 12:43:21 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:48.558 12:43:21 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:48.558 12:43:21 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:48.558 12:43:21 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:48.558 12:43:21 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:48.558 12:43:21 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:48.558 12:43:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:48.558 12:43:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:48.558 12:43:21 -- common/autotest_common.sh@10 -- # set +x 00:15:48.558 12:43:21 -- nvmf/common.sh@469 -- # nvmfpid=464088 00:15:48.558 12:43:21 -- nvmf/common.sh@470 -- # waitforlisten 464088 00:15:48.558 12:43:21 -- common/autotest_common.sh@829 -- # '[' -z 464088 ']' 00:15:48.558 12:43:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.558 12:43:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:48.558 12:43:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.558 12:43:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:48.558 12:43:21 -- common/autotest_common.sh@10 -- # set +x 00:15:48.558 12:43:21 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:48.558 [2024-11-20 12:43:21.620834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:48.558 [2024-11-20 12:43:21.620899] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.558 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.819 [2024-11-20 12:43:21.704091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:48.819 [2024-11-20 12:43:21.794450] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:48.819 [2024-11-20 12:43:21.794612] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.819 [2024-11-20 12:43:21.794624] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.819 [2024-11-20 12:43:21.794634] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:48.819 [2024-11-20 12:43:21.794781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.819 [2024-11-20 12:43:21.794954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.819 [2024-11-20 12:43:21.794955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.391 12:43:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:49.391 12:43:22 -- common/autotest_common.sh@862 -- # return 0 00:15:49.391 12:43:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:49.391 12:43:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:49.391 12:43:22 -- common/autotest_common.sh@10 -- # set +x 00:15:49.391 12:43:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.391 12:43:22 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:15:49.391 12:43:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.391 12:43:22 -- common/autotest_common.sh@10 -- # set +x 00:15:49.652 [2024-11-20 12:43:22.497754] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x242efa0/0x2433490) succeed. 00:15:49.652 [2024-11-20 12:43:22.511986] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24304f0/0x2474b30) succeed. 
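With the RDMA transport in place (the two create_ib_device notices directly above), the abort.sh trace that follows provisions a 64 MB malloc bdev, wraps it in a Delay0 delay bdev so the abort requests have in-flight I/O to cancel, exposes it through nqn.2016-06.io.spdk:cnode0 on the 192.168.100.8:4420 RDMA listener, and then drives it with the bundled abort example at queue depth 128. Collapsed into plain rpc.py calls (rpc_cmd in the trace is the autotest wrapper around this script), the sequence, including the transport creation just shown, looks roughly like the sketch below; every flag value is copied from the trace, only the $spdk/$rpc shorthands are added here:

    # Sketch only: condensed from the abort.sh trace that follows; assumes nvmf_tgt is running.
    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    rpc=$spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0          # 64 MB bdev, 4096-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

    # Same arguments as the trace: -c 0x1 core mask, -t 1 run time in seconds, -q 128 queue depth.
    $spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128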
00:15:49.652 12:43:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.652 12:43:22 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:49.652 12:43:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.652 12:43:22 -- common/autotest_common.sh@10 -- # set +x 00:15:49.652 Malloc0 00:15:49.652 12:43:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.652 12:43:22 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:49.652 12:43:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.652 12:43:22 -- common/autotest_common.sh@10 -- # set +x 00:15:49.652 Delay0 00:15:49.652 12:43:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.652 12:43:22 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:49.652 12:43:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.652 12:43:22 -- common/autotest_common.sh@10 -- # set +x 00:15:49.652 12:43:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.652 12:43:22 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:49.652 12:43:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.652 12:43:22 -- common/autotest_common.sh@10 -- # set +x 00:15:49.652 12:43:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.652 12:43:22 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:49.652 12:43:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.652 12:43:22 -- common/autotest_common.sh@10 -- # set +x 00:15:49.652 [2024-11-20 12:43:22.674277] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:49.652 12:43:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.652 12:43:22 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:49.652 12:43:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.652 12:43:22 -- common/autotest_common.sh@10 -- # set +x 00:15:49.652 12:43:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.652 12:43:22 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:49.652 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.914 [2024-11-20 12:43:22.783851] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:51.828 Initializing NVMe Controllers 00:15:51.828 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:15:51.828 controller IO queue size 128 less than required 00:15:51.828 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:51.828 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:51.828 Initialization complete. Launching workers. 
00:15:51.828 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37683 00:15:51.829 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37744, failed to submit 62 00:15:51.829 success 37683, unsuccess 61, failed 0 00:15:51.829 12:43:24 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:51.829 12:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.829 12:43:24 -- common/autotest_common.sh@10 -- # set +x 00:15:51.829 12:43:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.829 12:43:24 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:51.829 12:43:24 -- target/abort.sh@38 -- # nvmftestfini 00:15:51.829 12:43:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:51.829 12:43:24 -- nvmf/common.sh@116 -- # sync 00:15:51.829 12:43:24 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:51.829 12:43:24 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:51.829 12:43:24 -- nvmf/common.sh@119 -- # set +e 00:15:51.829 12:43:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:51.829 12:43:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:51.829 rmmod nvme_rdma 00:15:52.089 rmmod nvme_fabrics 00:15:52.089 12:43:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:52.089 12:43:24 -- nvmf/common.sh@123 -- # set -e 00:15:52.089 12:43:24 -- nvmf/common.sh@124 -- # return 0 00:15:52.089 12:43:24 -- nvmf/common.sh@477 -- # '[' -n 464088 ']' 00:15:52.089 12:43:24 -- nvmf/common.sh@478 -- # killprocess 464088 00:15:52.089 12:43:24 -- common/autotest_common.sh@936 -- # '[' -z 464088 ']' 00:15:52.089 12:43:24 -- common/autotest_common.sh@940 -- # kill -0 464088 00:15:52.089 12:43:24 -- common/autotest_common.sh@941 -- # uname 00:15:52.089 12:43:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:52.089 12:43:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 464088 00:15:52.089 12:43:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:52.089 12:43:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:52.089 12:43:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 464088' 00:15:52.089 killing process with pid 464088 00:15:52.089 12:43:25 -- common/autotest_common.sh@955 -- # kill 464088 00:15:52.089 12:43:25 -- common/autotest_common.sh@960 -- # wait 464088 00:15:52.350 12:43:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:52.350 12:43:25 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:52.350 00:15:52.350 real 0m10.546s 00:15:52.350 user 0m14.386s 00:15:52.350 sys 0m5.529s 00:15:52.350 12:43:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:52.350 12:43:25 -- common/autotest_common.sh@10 -- # set +x 00:15:52.350 ************************************ 00:15:52.350 END TEST nvmf_abort 00:15:52.350 ************************************ 00:15:52.350 12:43:25 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:52.350 12:43:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:52.350 12:43:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:52.350 12:43:25 -- common/autotest_common.sh@10 -- # set +x 00:15:52.350 ************************************ 00:15:52.350 START TEST nvmf_ns_hotplug_stress 00:15:52.350 ************************************ 00:15:52.350 12:43:25 -- common/autotest_common.sh@1114 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:52.350 * Looking for test storage... 00:15:52.350 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:52.350 12:43:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:52.350 12:43:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:52.350 12:43:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:52.612 12:43:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:52.612 12:43:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:52.612 12:43:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:52.612 12:43:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:52.612 12:43:25 -- scripts/common.sh@335 -- # IFS=.-: 00:15:52.612 12:43:25 -- scripts/common.sh@335 -- # read -ra ver1 00:15:52.612 12:43:25 -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.612 12:43:25 -- scripts/common.sh@336 -- # read -ra ver2 00:15:52.612 12:43:25 -- scripts/common.sh@337 -- # local 'op=<' 00:15:52.612 12:43:25 -- scripts/common.sh@339 -- # ver1_l=2 00:15:52.612 12:43:25 -- scripts/common.sh@340 -- # ver2_l=1 00:15:52.612 12:43:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:52.612 12:43:25 -- scripts/common.sh@343 -- # case "$op" in 00:15:52.612 12:43:25 -- scripts/common.sh@344 -- # : 1 00:15:52.612 12:43:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:52.612 12:43:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:52.612 12:43:25 -- scripts/common.sh@364 -- # decimal 1 00:15:52.612 12:43:25 -- scripts/common.sh@352 -- # local d=1 00:15:52.612 12:43:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.612 12:43:25 -- scripts/common.sh@354 -- # echo 1 00:15:52.612 12:43:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:52.612 12:43:25 -- scripts/common.sh@365 -- # decimal 2 00:15:52.612 12:43:25 -- scripts/common.sh@352 -- # local d=2 00:15:52.612 12:43:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.612 12:43:25 -- scripts/common.sh@354 -- # echo 2 00:15:52.612 12:43:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:52.612 12:43:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:52.612 12:43:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:52.612 12:43:25 -- scripts/common.sh@367 -- # return 0 00:15:52.612 12:43:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.612 12:43:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:52.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.612 --rc genhtml_branch_coverage=1 00:15:52.612 --rc genhtml_function_coverage=1 00:15:52.612 --rc genhtml_legend=1 00:15:52.612 --rc geninfo_all_blocks=1 00:15:52.612 --rc geninfo_unexecuted_blocks=1 00:15:52.612 00:15:52.612 ' 00:15:52.612 12:43:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:52.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.612 --rc genhtml_branch_coverage=1 00:15:52.612 --rc genhtml_function_coverage=1 00:15:52.612 --rc genhtml_legend=1 00:15:52.612 --rc geninfo_all_blocks=1 00:15:52.612 --rc geninfo_unexecuted_blocks=1 00:15:52.612 00:15:52.612 ' 00:15:52.612 12:43:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:52.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.612 --rc genhtml_branch_coverage=1 00:15:52.612 --rc genhtml_function_coverage=1 
00:15:52.612 --rc genhtml_legend=1 00:15:52.612 --rc geninfo_all_blocks=1 00:15:52.612 --rc geninfo_unexecuted_blocks=1 00:15:52.612 00:15:52.612 ' 00:15:52.612 12:43:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:52.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.612 --rc genhtml_branch_coverage=1 00:15:52.612 --rc genhtml_function_coverage=1 00:15:52.612 --rc genhtml_legend=1 00:15:52.612 --rc geninfo_all_blocks=1 00:15:52.612 --rc geninfo_unexecuted_blocks=1 00:15:52.612 00:15:52.612 ' 00:15:52.612 12:43:25 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:52.612 12:43:25 -- nvmf/common.sh@7 -- # uname -s 00:15:52.612 12:43:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.612 12:43:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.612 12:43:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.612 12:43:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.612 12:43:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.612 12:43:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.612 12:43:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.612 12:43:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.612 12:43:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.612 12:43:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.612 12:43:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:52.612 12:43:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:52.612 12:43:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.612 12:43:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.612 12:43:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:52.613 12:43:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:52.613 12:43:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.613 12:43:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.613 12:43:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.613 12:43:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.613 12:43:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.613 12:43:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.613 12:43:25 -- paths/export.sh@5 -- # export PATH 00:15:52.613 12:43:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.613 12:43:25 -- nvmf/common.sh@46 -- # : 0 00:15:52.613 12:43:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:52.613 12:43:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:52.613 12:43:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:52.613 12:43:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.613 12:43:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.613 12:43:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:52.613 12:43:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:52.613 12:43:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:52.613 12:43:25 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:52.613 12:43:25 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:15:52.613 12:43:25 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:52.613 12:43:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.613 12:43:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:52.613 12:43:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:52.613 12:43:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:52.613 12:43:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.613 12:43:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.613 12:43:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.613 12:43:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:52.613 12:43:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:52.613 12:43:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:52.613 12:43:25 -- common/autotest_common.sh@10 -- # set +x 00:16:00.764 12:43:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:00.764 12:43:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:00.764 12:43:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:00.764 12:43:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:00.764 12:43:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:00.764 12:43:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:00.764 12:43:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:00.764 12:43:32 -- nvmf/common.sh@294 -- # net_devs=() 00:16:00.764 12:43:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:00.764 12:43:32 -- nvmf/common.sh@295 -- 
# e810=() 00:16:00.764 12:43:32 -- nvmf/common.sh@295 -- # local -ga e810 00:16:00.764 12:43:32 -- nvmf/common.sh@296 -- # x722=() 00:16:00.764 12:43:32 -- nvmf/common.sh@296 -- # local -ga x722 00:16:00.764 12:43:32 -- nvmf/common.sh@297 -- # mlx=() 00:16:00.764 12:43:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:00.764 12:43:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:00.764 12:43:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:00.764 12:43:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:00.764 12:43:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:00.764 12:43:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:00.764 12:43:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:00.764 12:43:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:00.764 12:43:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:00.764 12:43:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:00.764 12:43:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:00.764 12:43:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:00.764 12:43:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:00.765 12:43:32 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:00.765 12:43:32 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:00.765 12:43:32 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:00.765 12:43:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:00.765 12:43:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:00.765 12:43:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:16:00.765 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:16:00.765 12:43:32 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:00.765 12:43:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:00.765 12:43:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:16:00.765 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:16:00.765 12:43:32 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:00.765 12:43:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:00.765 12:43:32 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:00.765 12:43:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.765 12:43:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
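The scan above matches host NICs purely by PCI vendor/device ID: 0x15b3 is the Mellanox vendor ID and 0x1015 a ConnectX-4 Lx family part, so both functions 0000:98:00.0 and 0000:98:00.1 land in the mlx array, and each function is then resolved to its Linux netdev through /sys/bus/pci/devices/<pci>/net/. A minimal stand-alone sketch of the same lookup (not the nvmf/common.sh implementation; it assumes lspci and the usual sysfs layout are available) would be:

  # List Mellanox (vendor 0x15b3) PCI functions and the netdev each one exposes
  for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$netdir" ] && echo "$pci -> $(basename "$netdir")"
    done
  done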
00:16:00.765 12:43:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.765 12:43:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:16:00.765 Found net devices under 0000:98:00.0: mlx_0_0 00:16:00.765 12:43:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.765 12:43:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:00.765 12:43:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.765 12:43:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:00.765 12:43:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.765 12:43:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:16:00.765 Found net devices under 0000:98:00.1: mlx_0_1 00:16:00.765 12:43:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.765 12:43:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:00.765 12:43:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:00.765 12:43:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:00.765 12:43:32 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:00.765 12:43:32 -- nvmf/common.sh@57 -- # uname 00:16:00.765 12:43:32 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:00.765 12:43:32 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:00.765 12:43:32 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:00.765 12:43:32 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:00.765 12:43:32 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:00.765 12:43:32 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:00.765 12:43:32 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:00.765 12:43:32 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:00.765 12:43:32 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:00.765 12:43:32 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:00.765 12:43:32 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:00.765 12:43:32 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:00.765 12:43:32 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:00.765 12:43:32 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:00.765 12:43:32 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:00.765 12:43:32 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:00.765 12:43:32 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:00.765 12:43:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.765 12:43:32 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:00.765 12:43:32 -- nvmf/common.sh@104 -- # continue 2 00:16:00.765 12:43:32 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:00.765 12:43:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.765 12:43:32 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.765 12:43:32 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:00.765 12:43:32 -- nvmf/common.sh@104 -- # continue 2 00:16:00.765 12:43:32 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:16:00.765 12:43:32 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:00.765 12:43:32 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:00.765 12:43:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:00.765 12:43:32 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:00.765 12:43:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:00.765 12:43:32 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:00.765 12:43:32 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:00.765 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:00.765 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:16:00.765 altname enp152s0f0np0 00:16:00.765 altname ens817f0np0 00:16:00.765 inet 192.168.100.8/24 scope global mlx_0_0 00:16:00.765 valid_lft forever preferred_lft forever 00:16:00.765 12:43:32 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:00.765 12:43:32 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:00.765 12:43:32 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:00.765 12:43:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:00.765 12:43:32 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:00.765 12:43:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:00.765 12:43:32 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:00.765 12:43:32 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:00.765 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:00.765 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:16:00.765 altname enp152s0f1np1 00:16:00.765 altname ens817f1np1 00:16:00.765 inet 192.168.100.9/24 scope global mlx_0_1 00:16:00.765 valid_lft forever preferred_lft forever 00:16:00.765 12:43:32 -- nvmf/common.sh@410 -- # return 0 00:16:00.765 12:43:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:00.765 12:43:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:00.765 12:43:32 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:00.765 12:43:32 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:00.765 12:43:32 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:00.765 12:43:32 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:00.765 12:43:32 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:00.765 12:43:32 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:00.765 12:43:32 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:00.765 12:43:32 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:00.765 12:43:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.765 12:43:32 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:00.765 12:43:32 -- nvmf/common.sh@104 -- # continue 2 00:16:00.765 12:43:32 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:00.765 12:43:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.765 12:43:32 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.765 12:43:32 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:00.765 12:43:32 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:00.765 12:43:32 -- 
nvmf/common.sh@104 -- # continue 2 00:16:00.765 12:43:32 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:00.765 12:43:32 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:00.765 12:43:32 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:00.765 12:43:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:00.765 12:43:32 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:00.765 12:43:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:00.765 12:43:32 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:00.765 12:43:32 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:00.765 12:43:32 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:00.765 12:43:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:00.765 12:43:32 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:00.765 12:43:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:00.765 12:43:32 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:00.765 192.168.100.9' 00:16:00.765 12:43:32 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:00.765 192.168.100.9' 00:16:00.765 12:43:32 -- nvmf/common.sh@445 -- # head -n 1 00:16:00.765 12:43:32 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:00.765 12:43:32 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:00.765 192.168.100.9' 00:16:00.765 12:43:32 -- nvmf/common.sh@446 -- # tail -n +2 00:16:00.765 12:43:32 -- nvmf/common.sh@446 -- # head -n 1 00:16:00.765 12:43:32 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:00.765 12:43:32 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:00.766 12:43:32 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:00.766 12:43:32 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:00.766 12:43:32 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:00.766 12:43:32 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:00.766 12:43:32 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:16:00.766 12:43:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:00.766 12:43:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:00.766 12:43:32 -- common/autotest_common.sh@10 -- # set +x 00:16:00.766 12:43:32 -- nvmf/common.sh@469 -- # nvmfpid=468488 00:16:00.766 12:43:32 -- nvmf/common.sh@470 -- # waitforlisten 468488 00:16:00.766 12:43:32 -- common/autotest_common.sh@829 -- # '[' -z 468488 ']' 00:16:00.766 12:43:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.766 12:43:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.766 12:43:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.766 12:43:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.766 12:43:32 -- common/autotest_common.sh@10 -- # set +x 00:16:00.766 12:43:32 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:00.766 [2024-11-20 12:43:32.702665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
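nvmfappstart has just launched the target application (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 468488) and waitforlisten now polls until the application answers on the UNIX-domain RPC socket /var/tmp/spdk.sock. A rough hand-rolled equivalent of that wait, shown only for illustration (rpc_get_methods is a standard SPDK RPC, but this retry loop is not the waitforlisten helper itself), is:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the RPC socket until the target is ready, bailing out if it dies first
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
  done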
00:16:00.766 [2024-11-20 12:43:32.702719] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.766 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.766 [2024-11-20 12:43:32.782654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:00.766 [2024-11-20 12:43:32.872185] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:00.766 [2024-11-20 12:43:32.872351] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.766 [2024-11-20 12:43:32.872363] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.766 [2024-11-20 12:43:32.872373] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.766 [2024-11-20 12:43:32.872517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.766 [2024-11-20 12:43:32.872682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.766 [2024-11-20 12:43:32.872682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:00.766 12:43:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.766 12:43:33 -- common/autotest_common.sh@862 -- # return 0 00:16:00.766 12:43:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:00.766 12:43:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:00.766 12:43:33 -- common/autotest_common.sh@10 -- # set +x 00:16:00.766 12:43:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.766 12:43:33 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:16:00.766 12:43:33 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:00.766 [2024-11-20 12:43:33.688352] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x169cfa0/0x16a1490) succeed. 00:16:00.766 [2024-11-20 12:43:33.702224] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x169e4f0/0x16e2b30) succeed. 
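With the application up on cores 1-3, the first RPC creates the RDMA transport (nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192), and the two "Create IB device ... succeed" notices confirm that the transport found the verbs devices exposed by the ib_uverbs/rdma_cm stack loaded earlier. If those notices were missing, a quick sanity check from the shell (assuming the usual rdma-core and iproute2 utilities are installed on the test host, which this log implies but never shows) would be:

  ibv_devices                  # should list mlx5_0 and mlx5_1
  rdma link show               # RDMA port/link state
  ip -o -4 addr show mlx_0_0   # the 192.168.100.8/24 address assigned above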
00:16:00.766 12:43:33 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:01.028 12:43:34 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:01.289 [2024-11-20 12:43:34.140127] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:01.290 12:43:34 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:01.290 12:43:34 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:16:01.551 Malloc0 00:16:01.551 12:43:34 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:01.551 Delay0 00:16:01.813 12:43:34 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:01.813 12:43:34 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:16:02.075 NULL1 00:16:02.075 12:43:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:02.336 12:43:35 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=469119 00:16:02.336 12:43:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:02.336 12:43:35 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:16:02.336 12:43:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.336 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.336 Read completed with error (sct=0, sc=11) 00:16:02.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.336 12:43:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:02.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.597 12:43:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:16:02.597 12:43:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:16:02.597 true 
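The target is now fully provisioned (subsystem nqn.2016-06.io.spdk:cnode1 with a 10-namespace cap, an RDMA listener on 192.168.100.8:4420, plus the Malloc0/Delay0 and NULL1 bdevs), spdk_nvme_perf (PID 469119) is issuing 512-byte random reads at queue depth 128 for 30 seconds, and the script has entered its stress loop: as long as perf is alive it detaches namespace 1, re-attaches Delay0, and bumps the NULL1 bdev's size up by one unit per pass (1000, 1001, 1002, ...). Stripped of the xtrace noise, and with paths shortened for readability, the loop the log is tracing amounts to:

  rpc_py=./scripts/rpc.py          # stands in for the full rpc.py path shown in the log
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc_py bdev_null_resize NULL1 "$null_size"
  done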
00:16:02.858 12:43:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:02.858 12:43:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:03.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.691 12:43:36 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:03.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.691 12:43:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:16:03.691 12:43:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:16:03.952 true 00:16:03.952 12:43:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:03.952 12:43:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:04.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.894 12:43:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:04.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.894 12:43:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:16:04.894 12:43:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:16:05.156 true 00:16:05.156 12:43:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:05.156 12:43:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:06.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.096 12:43:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:06.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.096 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.096 12:43:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:16:06.096 12:43:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:16:06.356 true 00:16:06.356 12:43:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:06.356 12:43:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.301 12:43:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:07.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.301 12:43:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:16:07.301 12:43:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:16:07.562 true 00:16:07.562 12:43:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:07.562 12:43:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:08.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.506 12:43:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:08.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.506 12:43:41 -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:16:08.506 12:43:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:16:08.506 true 00:16:08.772 12:43:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:08.772 12:43:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:09.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.715 12:43:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:09.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.715 12:43:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:16:09.715 12:43:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:16:09.975 true 00:16:09.975 12:43:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:09.975 12:43:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:10.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.919 12:43:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:10.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.919 12:43:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:16:10.919 12:43:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:16:11.181 true 00:16:11.181 12:43:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:11.181 12:43:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:12.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.123 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:16:12.123 12:43:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:12.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.123 12:43:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:16:12.123 12:43:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:16:12.384 true 00:16:12.384 12:43:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:12.384 12:43:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:13.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.326 12:43:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:13.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.326 12:43:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:16:13.326 12:43:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:16:13.587 true 00:16:13.587 12:43:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:13.587 12:43:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:14.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.529 12:43:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:14.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.529 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:16:14.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.529 12:43:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:16:14.529 12:43:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:16:14.790 true 00:16:14.790 12:43:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:14.790 12:43:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:15.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.732 12:43:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:15.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.993 12:43:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:16:15.993 12:43:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:16:15.993 true 00:16:15.993 12:43:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:15.993 12:43:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:16.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.939 12:43:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:16.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.939 12:43:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:16:16.939 12:43:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:16:17.201 true 00:16:17.201 12:43:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:17.201 12:43:50 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:18.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.146 12:43:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:18.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.146 12:43:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:16:18.146 12:43:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:16:18.407 true 00:16:18.407 12:43:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:18.407 12:43:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:19.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.351 12:43:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:19.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.351 12:43:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:16:19.351 12:43:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:16:19.612 true 00:16:19.612 12:43:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:19.612 12:43:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:20.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.556 12:43:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:20.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.556 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:16:20.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.556 12:43:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:16:20.556 12:43:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:16:20.818 true 00:16:20.818 12:43:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:20.818 12:43:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:21.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.763 12:43:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:21.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.025 12:43:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:16:22.025 12:43:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:16:22.025 true 00:16:22.025 12:43:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:22.025 12:43:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.970 12:43:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:22.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.230 12:43:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:16:23.230 12:43:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:16:23.230 true 
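The recurring "Read completed with error (sct=0, sc=11)" bursts are the point of the exercise rather than a failure: perf keeps reads in flight while the namespace is being detached and re-attached, so some of them complete with an error status instead of data, and perf rate-limits the report ("Message suppressed 999 times"). The subsystem and its discovery listener stay reachable throughout; purely as an illustration (this is not part of the test run), a host with nvme-cli could confirm that from the initiator side with:

  nvme discover -t rdma -a 192.168.100.8 -s 4420
  # and, using the connect options exported by nvmf/common.sh (-i 15 I/O queues):
  nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1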
00:16:23.230 12:43:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:23.230 12:43:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.175 12:43:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:24.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.175 12:43:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:16:24.175 12:43:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:16:24.435 true 00:16:24.435 12:43:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:24.435 12:43:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:25.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.377 12:43:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:25.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.377 12:43:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:16:25.377 12:43:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:16:25.639 true 00:16:25.639 12:43:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:25.639 12:43:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:26.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.584 12:43:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:26.584 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.584 12:43:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:16:26.584 12:43:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:16:26.845 true 00:16:26.845 12:43:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:26.845 12:43:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.790 12:44:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:27.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.052 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.052 12:44:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:16:28.052 12:44:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:16:28.052 true 00:16:28.052 12:44:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:28.052 12:44:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:28.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.995 12:44:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:28.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.257 12:44:02 -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:16:29.257 12:44:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:16:29.257 true 00:16:29.257 12:44:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:29.257 12:44:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:30.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.200 12:44:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:30.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.200 12:44:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:16:30.200 12:44:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:16:30.461 true 00:16:30.461 12:44:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:30.461 12:44:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.404 12:44:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:31.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.404 12:44:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:16:31.404 12:44:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:16:31.666 true 00:16:31.666 12:44:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:31.666 12:44:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.609 12:44:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:32.609 12:44:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:16:32.609 12:44:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:16:32.871 true 00:16:32.872 12:44:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:32.872 12:44:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.133 12:44:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:33.133 12:44:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:16:33.133 12:44:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:16:33.394 true 00:16:33.394 12:44:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:33.394 12:44:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.655 12:44:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:33.655 12:44:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:16:33.655 12:44:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:16:33.916 true 00:16:33.916 12:44:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:33.916 12:44:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.916 12:44:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:34.176 12:44:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:16:34.176 12:44:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:16:34.437 true 00:16:34.437 12:44:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:34.437 12:44:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.437 Initializing NVMe Controllers 00:16:34.437 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:34.437 Controller IO queue size 128, less than required. 00:16:34.437 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:34.437 Controller IO queue size 128, less than required. 00:16:34.437 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:34.437 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:34.437 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:34.437 Initialization complete. Launching workers. 
00:16:34.437 ======================================================== 00:16:34.437 Latency(us) 00:16:34.437 Device Information : IOPS MiB/s Average min max 00:16:34.437 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7914.37 3.86 14277.06 1390.55 1185175.55 00:16:34.437 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 41218.20 20.13 3105.16 1393.20 393892.39 00:16:34.437 ======================================================== 00:16:34.437 Total : 49132.57 23.99 4904.75 1390.55 1185175.55 00:16:34.437 00:16:34.437 12:44:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:34.698 12:44:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:16:34.698 12:44:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:16:34.959 true 00:16:34.959 12:44:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 469119 00:16:34.959 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (469119) - No such process 00:16:34.959 12:44:07 -- target/ns_hotplug_stress.sh@53 -- # wait 469119 00:16:34.959 12:44:07 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.959 12:44:08 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:35.220 12:44:08 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:16:35.220 12:44:08 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:16:35.220 12:44:08 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:16:35.220 12:44:08 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:35.220 12:44:08 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:16:35.480 null0 00:16:35.480 12:44:08 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:35.480 12:44:08 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:35.480 12:44:08 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:16:35.480 null1 00:16:35.480 12:44:08 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:35.480 12:44:08 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:35.480 12:44:08 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:16:35.740 null2 00:16:35.741 12:44:08 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:35.741 12:44:08 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:35.741 12:44:08 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:16:35.741 null3 00:16:36.001 12:44:08 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:36.001 12:44:08 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:36.001 12:44:08 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:16:36.001 null4 00:16:36.001 12:44:09 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:36.001 12:44:09 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
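Two things worth noting at this point in the log. First, the performance summary printed just above is internally consistent: total IOPS 7914.37 + 41218.20 = 49132.57, the IOPS-weighted average latency (7914.37 * 14277.06 + 41218.20 * 3105.16) / 49132.57 ≈ 4904.75 us matches the Total row, and the min/max columns are the extremes of the two namespace rows. Second, once kill -0 reports "No such process" the I/O job is finished, so the script removes the remaining namespaces and provisions eight null bdevs (null0 through null7, size 100 with a 4096-byte block size; null5-null7 are created just below) for the multi-worker phase. A hedged sketch of that provisioning step, reconstructed from the @58-@60 trace entries (the rpc shorthand and loop form are assumptions):

  # Reconstructed sketch of the null-bdev provisioning traced at @58-@60.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  nthreads=8
  pids=()                                       # filled in later, when the workers are launched
  for (( i = 0; i < nthreads; i++ )); do
      $rpc bdev_null_create "null$i" 100 4096   # one null bdev per worker thread
  done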
00:16:36.001 12:44:09 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:16:36.268 null5 00:16:36.268 12:44:09 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:36.268 12:44:09 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:36.268 12:44:09 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:16:36.268 null6 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:16:36.531 null7 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:36.531 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@66 -- # wait 476057 476058 476060 476062 476065 476067 476068 476070 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:36.532 12:44:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:36.793 12:44:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.793 12:44:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:36.793 12:44:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:36.793 12:44:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:36.793 12:44:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:36.793 12:44:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:36.793 12:44:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:36.793 12:44:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:37.055 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.055 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.055 12:44:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:37.055 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.055 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.055 12:44:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:37.055 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.055 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.055 12:44:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:37.055 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.055 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.055 12:44:09 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:37.055 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.055 12:44:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.055 12:44:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:37.055 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.055 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.055 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:37.055 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.055 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.055 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:37.055 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.055 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.055 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:37.055 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:37.055 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:37.055 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:37.055 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:37.055 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:37.316 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:37.316 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:37.316 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:37.316 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.316 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.316 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:37.316 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.316 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.316 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:37.316 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:16:37.317 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.317 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:37.317 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.317 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.317 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:37.317 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.317 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.317 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:37.317 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.317 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.317 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:37.317 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.317 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.317 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:37.317 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.317 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.317 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.578 12:44:10 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.578 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:37.839 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:38.100 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.100 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.100 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:38.100 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.100 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.100 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:38.100 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.100 12:44:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.100 12:44:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:38.100 12:44:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:38.100 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.100 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.100 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:38.100 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.100 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.100 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:38.100 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.100 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.100 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:38.100 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:38.100 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.100 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:38.100 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.100 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.100 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:38.100 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
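From roughly 00:16:36 to 00:16:40 the add/remove traces for different namespace IDs interleave out of order because eight add_remove workers run concurrently against nqn.2016-06.io.spdk:cnode1, one per null bdev; the earlier "wait 476057 476058 476060 476062 476065 476067 476068 476070" line is the parent shell waiting on those worker PIDs. Each worker attaches its bdev under a fixed namespace ID and detaches it again, ten times. A hedged sketch of the worker and its launch, reconstructed from the @14-@18 and @62-@66 trace entries (names follow the trace where visible; everything else is an assumption):

  # Reconstructed sketch of one add/remove worker and the parallel launch.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  add_remove() {
      local nsid=$1 bdev=$2                  # @14: e.g. nsid=4 bdev=null3
      for (( i = 0; i < 10; i++ )); do       # @16: ten add/remove cycles per worker
          $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
          $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
      done
  }
  nthreads=8
  pids=()
  for (( i = 0; i < nthreads; i++ )); do     # @62-@64: one background worker per null bdev
      add_remove $((i + 1)) "null$i" &
      pids+=($!)
  done
  wait "${pids[@]}"                          # @66: the "wait 476057 476058 ..." line in the trace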
00:16:38.100 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.361 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.622 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:38.884 12:44:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:39.145 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:39.426 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.426 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.426 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:39.426 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:39.426 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.426 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.426 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:39.426 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.427 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:39.691 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:39.960 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:39.960 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:39.960 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:39.960 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:39.960 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:39.960 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.960 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.960 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:39.960 12:44:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:39.960 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.960 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.960 12:44:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:39.960 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.960 12:44:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.960 12:44:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.960 12:44:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.960 12:44:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:39.960 12:44:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.235 12:44:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.235 12:44:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.235 12:44:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.235 12:44:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.235 12:44:13 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.235 12:44:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.235 12:44:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:40.235 12:44:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:40.235 12:44:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.235 12:44:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.235 12:44:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.235 12:44:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.235 12:44:13 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:40.235 12:44:13 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:40.235 12:44:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:40.235 12:44:13 -- nvmf/common.sh@116 -- # sync 00:16:40.235 12:44:13 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:40.235 12:44:13 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:40.235 12:44:13 -- nvmf/common.sh@119 -- # set +e 00:16:40.235 12:44:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:40.235 12:44:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:40.235 rmmod nvme_rdma 00:16:40.235 rmmod nvme_fabrics 00:16:40.235 12:44:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:40.524 12:44:13 -- nvmf/common.sh@123 -- # set -e 00:16:40.524 12:44:13 -- nvmf/common.sh@124 -- # return 0 00:16:40.524 12:44:13 -- nvmf/common.sh@477 -- # '[' -n 468488 ']' 00:16:40.524 12:44:13 -- nvmf/common.sh@478 -- # killprocess 468488 00:16:40.524 12:44:13 -- common/autotest_common.sh@936 -- # '[' -z 468488 ']' 00:16:40.524 12:44:13 -- common/autotest_common.sh@940 -- # kill -0 468488 00:16:40.524 12:44:13 -- common/autotest_common.sh@941 -- # uname 00:16:40.524 12:44:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:40.524 12:44:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 468488 00:16:40.524 12:44:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:40.524 12:44:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:40.524 12:44:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 468488' 00:16:40.524 killing process with pid 468488 00:16:40.524 12:44:13 -- common/autotest_common.sh@955 -- # kill 468488 00:16:40.525 12:44:13 -- common/autotest_common.sh@960 -- # wait 468488 00:16:40.525 12:44:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:40.525 12:44:13 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:40.525 00:16:40.525 real 0m48.256s 00:16:40.525 user 3m15.039s 00:16:40.525 sys 0m11.741s 00:16:40.525 12:44:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:40.525 12:44:13 -- common/autotest_common.sh@10 -- # set +x 00:16:40.525 ************************************ 00:16:40.525 END TEST nvmf_ns_hotplug_stress 00:16:40.525 ************************************ 00:16:40.525 12:44:13 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:40.525 12:44:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:40.525 12:44:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:40.525 12:44:13 -- common/autotest_common.sh@10 -- # set +x 00:16:40.525 
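The teardown traced above is the standard exit path: the trap is cleared, nvmftestfini from nvmf/common.sh runs nvmfcleanup (sync, then repeated modprobe -v -r of nvme-rdma and nvme-fabrics, which produces the "rmmod nvme_rdma" / "rmmod nvme_fabrics" lines), and killprocess stops the nvmf target application (PID 468488, process name reactor_1). The whole nvmf_ns_hotplug_stress test took 0m48.256s wall-clock (3m15.039s user, 0m11.741s sys), and the harness moves straight on to run_test nvmf_connect_stress over the same rdma transport. A hedged sketch of the cleanup sequence, reconstructed from the traced nvmf/common.sh entries (variable names such as nvmfpid are assumptions; the real helpers carry more error handling):

  # Reconstructed sketch of the traced teardown; simplified relative to nvmf/common.sh.
  nvmfcleanup() {
      sync
      set +e
      for i in {1..20}; do                                        # @120: retry unloading the transport modules
          modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
      done
      set -e
  }
  nvmftestfini() {
      nvmfcleanup
      [ -n "$nvmfpid" ] && killprocess "$nvmfpid"                  # 468488 in this run, running as reactor_1
  }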
************************************ 00:16:40.525 START TEST nvmf_connect_stress 00:16:40.525 ************************************ 00:16:40.525 12:44:13 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:40.830 * Looking for test storage... 00:16:40.830 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:40.830 12:44:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:40.830 12:44:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:40.830 12:44:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:40.830 12:44:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:40.830 12:44:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:40.830 12:44:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:40.830 12:44:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:40.830 12:44:13 -- scripts/common.sh@335 -- # IFS=.-: 00:16:40.830 12:44:13 -- scripts/common.sh@335 -- # read -ra ver1 00:16:40.830 12:44:13 -- scripts/common.sh@336 -- # IFS=.-: 00:16:40.830 12:44:13 -- scripts/common.sh@336 -- # read -ra ver2 00:16:40.830 12:44:13 -- scripts/common.sh@337 -- # local 'op=<' 00:16:40.830 12:44:13 -- scripts/common.sh@339 -- # ver1_l=2 00:16:40.830 12:44:13 -- scripts/common.sh@340 -- # ver2_l=1 00:16:40.830 12:44:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:40.830 12:44:13 -- scripts/common.sh@343 -- # case "$op" in 00:16:40.830 12:44:13 -- scripts/common.sh@344 -- # : 1 00:16:40.830 12:44:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:40.830 12:44:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:40.830 12:44:13 -- scripts/common.sh@364 -- # decimal 1 00:16:40.830 12:44:13 -- scripts/common.sh@352 -- # local d=1 00:16:40.830 12:44:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:40.830 12:44:13 -- scripts/common.sh@354 -- # echo 1 00:16:40.830 12:44:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:40.830 12:44:13 -- scripts/common.sh@365 -- # decimal 2 00:16:40.830 12:44:13 -- scripts/common.sh@352 -- # local d=2 00:16:40.830 12:44:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:40.830 12:44:13 -- scripts/common.sh@354 -- # echo 2 00:16:40.830 12:44:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:40.830 12:44:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:40.830 12:44:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:40.830 12:44:13 -- scripts/common.sh@367 -- # return 0 00:16:40.830 12:44:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:40.830 12:44:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:40.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.830 --rc genhtml_branch_coverage=1 00:16:40.830 --rc genhtml_function_coverage=1 00:16:40.830 --rc genhtml_legend=1 00:16:40.830 --rc geninfo_all_blocks=1 00:16:40.830 --rc geninfo_unexecuted_blocks=1 00:16:40.830 00:16:40.830 ' 00:16:40.830 12:44:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:40.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.830 --rc genhtml_branch_coverage=1 00:16:40.830 --rc genhtml_function_coverage=1 00:16:40.831 --rc genhtml_legend=1 00:16:40.831 --rc geninfo_all_blocks=1 00:16:40.831 --rc geninfo_unexecuted_blocks=1 00:16:40.831 00:16:40.831 ' 00:16:40.831 12:44:13 -- common/autotest_common.sh@1704 
-- # export 'LCOV=lcov 00:16:40.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.831 --rc genhtml_branch_coverage=1 00:16:40.831 --rc genhtml_function_coverage=1 00:16:40.831 --rc genhtml_legend=1 00:16:40.831 --rc geninfo_all_blocks=1 00:16:40.831 --rc geninfo_unexecuted_blocks=1 00:16:40.831 00:16:40.831 ' 00:16:40.831 12:44:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:40.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.831 --rc genhtml_branch_coverage=1 00:16:40.831 --rc genhtml_function_coverage=1 00:16:40.831 --rc genhtml_legend=1 00:16:40.831 --rc geninfo_all_blocks=1 00:16:40.831 --rc geninfo_unexecuted_blocks=1 00:16:40.831 00:16:40.831 ' 00:16:40.831 12:44:13 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:40.831 12:44:13 -- nvmf/common.sh@7 -- # uname -s 00:16:40.831 12:44:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.831 12:44:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.831 12:44:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.831 12:44:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.831 12:44:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.831 12:44:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.831 12:44:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.831 12:44:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.831 12:44:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.831 12:44:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.831 12:44:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:40.831 12:44:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:40.831 12:44:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.831 12:44:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.831 12:44:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:40.831 12:44:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:40.831 12:44:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.831 12:44:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.831 12:44:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.831 12:44:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.831 12:44:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.831 12:44:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.831 12:44:13 -- paths/export.sh@5 -- # export PATH 00:16:40.831 12:44:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.831 12:44:13 -- nvmf/common.sh@46 -- # : 0 00:16:40.831 12:44:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:40.831 12:44:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:40.831 12:44:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:40.831 12:44:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.831 12:44:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.831 12:44:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:40.831 12:44:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:40.831 12:44:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:40.831 12:44:13 -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:40.831 12:44:13 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:40.831 12:44:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.831 12:44:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:40.831 12:44:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:40.831 12:44:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:40.831 12:44:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.831 12:44:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.831 12:44:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.831 12:44:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:40.831 12:44:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:40.831 12:44:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:40.831 12:44:13 -- common/autotest_common.sh@10 -- # set +x 00:16:47.684 12:44:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:47.684 12:44:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:47.684 12:44:20 -- nvmf/common.sh@290 -- # local -a pci_devs 
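The cmp_versions/lt trace a few lines up (lt 1.15 2) is how scripts/common.sh decides whether the installed lcov predates 2.x: both dotted version strings are split into fields and compared left to right. A standalone re-implementation of that idea, with an illustrative function name (not the one in scripts/common.sh) and assuming purely numeric fields:

  # version_lt A B  ->  succeeds (exit 0) when dotted version A is strictly lower than B.
  # Assumes numeric fields only; pre-release suffixes are not handled.
  version_lt() {
      local -a v1 v2
      IFS=.- read -ra v1 <<< "$1"
      IFS=.- read -ra v2 <<< "$2"
      local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i a b
      for ((i = 0; i < n; i++)); do
          a=${v1[i]:-0}; b=${v2[i]:-0}      # missing fields count as 0
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1                              # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov older than 2.x: keep the legacy lcov_*_coverage option names"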
00:16:47.684 12:44:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:47.684 12:44:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:47.684 12:44:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:47.684 12:44:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:47.684 12:44:20 -- nvmf/common.sh@294 -- # net_devs=() 00:16:47.684 12:44:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:47.684 12:44:20 -- nvmf/common.sh@295 -- # e810=() 00:16:47.684 12:44:20 -- nvmf/common.sh@295 -- # local -ga e810 00:16:47.684 12:44:20 -- nvmf/common.sh@296 -- # x722=() 00:16:47.684 12:44:20 -- nvmf/common.sh@296 -- # local -ga x722 00:16:47.684 12:44:20 -- nvmf/common.sh@297 -- # mlx=() 00:16:47.684 12:44:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:47.684 12:44:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:47.684 12:44:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:47.684 12:44:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:47.684 12:44:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:47.684 12:44:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:47.684 12:44:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:47.684 12:44:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:47.684 12:44:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:47.684 12:44:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:47.684 12:44:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:47.684 12:44:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:47.684 12:44:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:47.684 12:44:20 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:47.684 12:44:20 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:47.684 12:44:20 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:47.684 12:44:20 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:47.684 12:44:20 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:47.684 12:44:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:47.684 12:44:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:47.684 12:44:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:16:47.684 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:16:47.684 12:44:20 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:47.684 12:44:20 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:47.684 12:44:20 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:47.684 12:44:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:47.684 12:44:20 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:47.684 12:44:20 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:47.684 12:44:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:47.684 12:44:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:16:47.684 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:16:47.684 12:44:20 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:47.684 12:44:20 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:47.684 12:44:20 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:47.684 12:44:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:47.684 12:44:20 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 
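The Found 0000:98:00.0 (0x15b3 - 0x1015) lines come from matching the PCI bus against a vendor/device table (Mellanox 0x15b3 here; the e810/x722 entries are Intel parts) and then resolving each matching PCI function to its net device through sysfs. A rough standalone equivalent, assuming lspci from pciutils is installed; the helper name is invented for the sketch:

  # List Mellanox NICs (vendor 0x15b3) and the kernel net devices behind them.
  list_mlx_netdevs() {
      local pci dev
      # lspci: -D print the PCI domain, -n numeric IDs, -d 15b3: filter by vendor.
      for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
          # Each function exposes its netdev name(s) under sysfs, the same path the
          # pci_net_devs=(/sys/bus/pci/devices/$pci/net/*) expansion traced just below uses.
          for dev in /sys/bus/pci/devices/"$pci"/net/*; do
              [ -e "$dev" ] && echo "$pci -> ${dev##*/}"
          done
      done
  }

  list_mlx_netdevs      # e.g. "0000:98:00.0 -> mlx_0_0" on this machine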
00:16:47.684 12:44:20 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:47.684 12:44:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:47.684 12:44:20 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:47.684 12:44:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:47.684 12:44:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.684 12:44:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:47.684 12:44:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.684 12:44:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:16:47.684 Found net devices under 0000:98:00.0: mlx_0_0 00:16:47.684 12:44:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.684 12:44:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:47.684 12:44:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.684 12:44:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:47.684 12:44:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.684 12:44:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:16:47.684 Found net devices under 0000:98:00.1: mlx_0_1 00:16:47.684 12:44:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.684 12:44:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:47.684 12:44:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:47.684 12:44:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:47.684 12:44:20 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:47.684 12:44:20 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:47.684 12:44:20 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:47.684 12:44:20 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:47.684 12:44:20 -- nvmf/common.sh@57 -- # uname 00:16:47.684 12:44:20 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:47.684 12:44:20 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:47.684 12:44:20 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:47.684 12:44:20 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:47.684 12:44:20 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:47.684 12:44:20 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:47.684 12:44:20 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:47.684 12:44:20 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:47.684 12:44:20 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:47.684 12:44:20 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:47.684 12:44:20 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:47.684 12:44:20 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:47.684 12:44:20 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:47.684 12:44:20 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:47.684 12:44:20 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:48.001 12:44:20 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:48.001 12:44:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:48.001 12:44:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:48.001 12:44:20 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:48.001 12:44:20 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:48.001 12:44:20 -- nvmf/common.sh@104 -- # continue 2 00:16:48.001 12:44:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:48.001 12:44:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
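rdma_device_init above is little more than loading the InfiniBand/RDMA kernel stack; the rxe_cfg call that follows only matters for soft-RoCE and is effectively a no-op here because real mlx5 ports are present. The same bring-up, condensed, with a basic sanity check (the check itself is an addition, not part of the traced script):

  # Load the kernel modules the RDMA data path needs (same list as the trace).
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      sudo modprobe "$mod"
  done
  sudo modprobe nvme-rdma   # host-side NVMe/RDMA driver, removed again by nvmftestfini

  # With mlx5 (or soft-RoCE) present, devices appear under /sys/class/infiniband.
  if [ -n "$(ls -A /sys/class/infiniband 2>/dev/null)" ]; then
      echo "RDMA devices: $(ls /sys/class/infiniband | tr '\n' ' ')"
  else
      echo "no RDMA devices registered" >&2
  fi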
00:16:48.001 12:44:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:48.001 12:44:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:48.001 12:44:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:48.001 12:44:20 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:48.001 12:44:20 -- nvmf/common.sh@104 -- # continue 2 00:16:48.001 12:44:20 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:48.001 12:44:20 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:48.001 12:44:20 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:48.001 12:44:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:48.001 12:44:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:48.001 12:44:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:48.001 12:44:20 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:48.001 12:44:20 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:48.001 12:44:20 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:48.001 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:48.001 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:16:48.001 altname enp152s0f0np0 00:16:48.001 altname ens817f0np0 00:16:48.001 inet 192.168.100.8/24 scope global mlx_0_0 00:16:48.001 valid_lft forever preferred_lft forever 00:16:48.001 12:44:20 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:48.001 12:44:20 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:48.001 12:44:20 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:48.001 12:44:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:48.001 12:44:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:48.001 12:44:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:48.001 12:44:20 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:48.001 12:44:20 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:48.001 12:44:20 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:48.001 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:48.001 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:16:48.001 altname enp152s0f1np1 00:16:48.001 altname ens817f1np1 00:16:48.001 inet 192.168.100.9/24 scope global mlx_0_1 00:16:48.001 valid_lft forever preferred_lft forever 00:16:48.001 12:44:20 -- nvmf/common.sh@410 -- # return 0 00:16:48.001 12:44:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:48.001 12:44:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:48.001 12:44:20 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:48.001 12:44:20 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:48.001 12:44:20 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:48.001 12:44:20 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:48.001 12:44:20 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:48.001 12:44:20 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:48.001 12:44:20 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:48.001 12:44:20 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:48.001 12:44:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:48.001 12:44:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:48.001 12:44:20 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:48.001 12:44:20 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:48.001 12:44:20 -- nvmf/common.sh@104 -- # continue 2 00:16:48.001 12:44:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 
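get_ip_address in the trace above pulls the IPv4 address of each mlx_0_* port out of `ip -o -4 addr show`: field 4 of the one-line output is ADDR/PREFIX, so cut -d/ -f1 leaves just the address. The same extraction as a standalone snippet, preceded by the assignment allocate_nic_ips would perform if the port had no address yet (skipped in this run, since the [[ -z 192.168.100.8 ]] check succeeded on an existing address); the 192.168.100.8/24 value is simply the one seen in this log:

  iface=mlx_0_0

  # Assign the test address only if it is not already there (errors ignored).
  sudo ip addr add 192.168.100.8/24 dev "$iface" 2>/dev/null || true

  # get_ip_address equivalent: field 4 of the one-line output is "ADDR/PREFIX".
  ip=$(ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1)
  echo "$iface -> $ip"          # prints "mlx_0_0 -> 192.168.100.8" here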
00:16:48.001 12:44:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:48.001 12:44:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:48.001 12:44:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:48.001 12:44:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:48.001 12:44:20 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:48.001 12:44:20 -- nvmf/common.sh@104 -- # continue 2 00:16:48.001 12:44:20 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:48.001 12:44:20 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:48.001 12:44:20 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:48.001 12:44:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:48.001 12:44:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:48.001 12:44:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:48.001 12:44:20 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:48.001 12:44:20 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:48.001 12:44:20 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:48.001 12:44:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:48.001 12:44:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:48.001 12:44:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:48.001 12:44:20 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:48.001 192.168.100.9' 00:16:48.001 12:44:20 -- nvmf/common.sh@445 -- # head -n 1 00:16:48.001 12:44:20 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:48.001 192.168.100.9' 00:16:48.001 12:44:20 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:48.001 12:44:20 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:48.001 192.168.100.9' 00:16:48.001 12:44:20 -- nvmf/common.sh@446 -- # tail -n +2 00:16:48.001 12:44:20 -- nvmf/common.sh@446 -- # head -n 1 00:16:48.001 12:44:20 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:48.001 12:44:20 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:48.001 12:44:20 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:48.001 12:44:20 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:48.001 12:44:20 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:48.001 12:44:20 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:48.001 12:44:20 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:48.001 12:44:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:48.001 12:44:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:48.001 12:44:20 -- common/autotest_common.sh@10 -- # set +x 00:16:48.001 12:44:20 -- nvmf/common.sh@469 -- # nvmfpid=480650 00:16:48.001 12:44:20 -- nvmf/common.sh@470 -- # waitforlisten 480650 00:16:48.001 12:44:20 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:48.001 12:44:20 -- common/autotest_common.sh@829 -- # '[' -z 480650 ']' 00:16:48.001 12:44:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.001 12:44:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.001 12:44:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
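nvmfappstart above launches build/bin/nvmf_tgt with core mask 0xE and then waitforlisten polls until PID 480650 answers on the default RPC socket; the 'Waiting for process to start up...' line is that poll. A simplified start-and-wait, assuming the workspace layout seen in this log; the 30-second polling loop is an assumption of this sketch, not waitforlisten itself:

  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  SOCK=/var/tmp/spdk.sock

  # Core mask 0xE = cores 1-3 (matching the three reactors started below),
  # -e 0xFFFF enables all tracepoint groups, as in the traced command line.
  sudo "$SPDK_DIR/build/bin/nvmf_tgt" -m 0xE -e 0xFFFF &
  tgt_pid=$!

  # Poll until the app answers RPCs on the unix socket (give up after ~30 s).
  for _ in $(seq 1 30); do
      if sudo "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
          echo "nvmf_tgt ($tgt_pid) is listening on $SOCK"
          break
      fi
      sleep 1
  done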
00:16:48.001 12:44:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.001 12:44:20 -- common/autotest_common.sh@10 -- # set +x 00:16:48.001 [2024-11-20 12:44:20.986704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:48.001 [2024-11-20 12:44:20.986778] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.001 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.001 [2024-11-20 12:44:21.074291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:48.301 [2024-11-20 12:44:21.167438] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:48.301 [2024-11-20 12:44:21.167611] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.301 [2024-11-20 12:44:21.167622] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.301 [2024-11-20 12:44:21.167630] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.301 [2024-11-20 12:44:21.167774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.301 [2024-11-20 12:44:21.167938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.301 [2024-11-20 12:44:21.167939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:48.935 12:44:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.935 12:44:21 -- common/autotest_common.sh@862 -- # return 0 00:16:48.935 12:44:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:48.935 12:44:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:48.935 12:44:21 -- common/autotest_common.sh@10 -- # set +x 00:16:48.935 12:44:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.935 12:44:21 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:48.935 12:44:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.935 12:44:21 -- common/autotest_common.sh@10 -- # set +x 00:16:48.935 [2024-11-20 12:44:21.859313] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1426fa0/0x142b490) succeed. 00:16:48.935 [2024-11-20 12:44:21.873188] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14284f0/0x146cb30) succeed. 
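With the app up, connect_stress.sh configures the target through the RPCs traced here and just below: an RDMA transport, a subsystem that allows any host and caps namespaces at 10, an RDMA listener on 192.168.100.8:4420, and a null bdev for the namespace to sit on. Gathered into one runnable block; the final add_ns call is inferred from the hotplug test earlier in this log rather than visible in this trace:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  sudo "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  sudo "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10   # -a: allow any host
  sudo "$rpc" nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
  sudo "$rpc" bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512-byte blocks
  sudo "$rpc" nvmf_subsystem_add_ns "$nqn" NULL1   # attach it as a namespace (nsid auto-assigned)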
00:16:48.935 12:44:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.935 12:44:21 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:48.935 12:44:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.935 12:44:21 -- common/autotest_common.sh@10 -- # set +x 00:16:48.935 12:44:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.935 12:44:21 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:48.935 12:44:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.935 12:44:21 -- common/autotest_common.sh@10 -- # set +x 00:16:48.935 [2024-11-20 12:44:21.991182] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:48.935 12:44:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.935 12:44:21 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:48.935 12:44:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.935 12:44:21 -- common/autotest_common.sh@10 -- # set +x 00:16:48.935 NULL1 00:16:48.935 12:44:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.935 12:44:22 -- target/connect_stress.sh@21 -- # PERF_PID=481000 00:16:48.935 12:44:22 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:48.935 12:44:22 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:48.935 12:44:22 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:48.935 12:44:22 -- target/connect_stress.sh@27 -- # seq 1 20 00:16:48.935 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:48.935 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:48.935 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:48.935 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:48.935 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:48.935 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:49.268 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.268 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:49.268 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.268 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:49.268 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.268 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.268 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:49.268 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.268 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:49.268 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.268 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:49.268 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.268 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:49.268 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.268 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:49.268 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.268 12:44:22 -- target/connect_stress.sh@28 -- # cat 
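The long run of kill -0 481000 / rpc_cmd pairs that follows is the watchdog half of the test: as long as the connect_stress binary (PID 481000) is still alive, the script keeps issuing RPCs against the target, and the loop ends once kill -0 reports 'No such process'. The bare pattern, with a placeholder standing in for the real RPC call:

  # Keep poking the target for as long as a worker process is alive.
  # "do_one_rpc" is a placeholder for the rpc_cmd invocations in the trace.
  monitor_while_alive() {
      local worker_pid=$1
      while kill -0 "$worker_pid" 2>/dev/null; do   # signal 0: existence check only
          do_one_rpc || true
          sleep 1
      done
      echo "worker $worker_pid exited; monitor done"
  }

  # usage: monitor_while_alive "$PERF_PID"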
00:16:49.268 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.268 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:49.268 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.268 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:49.268 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.268 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:49.268 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.268 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:49.268 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.268 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:49.268 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.268 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:49.268 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.268 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:49.268 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.268 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:49.268 12:44:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.268 12:44:22 -- target/connect_stress.sh@28 -- # cat 00:16:49.268 12:44:22 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:49.268 12:44:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.268 12:44:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.268 12:44:22 -- common/autotest_common.sh@10 -- # set +x 00:16:49.584 12:44:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.584 12:44:22 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:49.584 12:44:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.584 12:44:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.584 12:44:22 -- common/autotest_common.sh@10 -- # set +x 00:16:49.868 12:44:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.868 12:44:22 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:49.868 12:44:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.868 12:44:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.868 12:44:22 -- common/autotest_common.sh@10 -- # set +x 00:16:50.167 12:44:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.167 12:44:23 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:50.167 12:44:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.167 12:44:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.167 12:44:23 -- common/autotest_common.sh@10 -- # set +x 00:16:50.472 12:44:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.472 12:44:23 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:50.472 12:44:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.472 12:44:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.472 12:44:23 -- common/autotest_common.sh@10 -- # set +x 00:16:50.736 12:44:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.736 12:44:23 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:50.736 12:44:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.736 12:44:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.736 12:44:23 -- common/autotest_common.sh@10 -- # set +x 00:16:50.997 12:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.257 12:44:24 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:51.257 12:44:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.257 12:44:24 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.257 12:44:24 -- common/autotest_common.sh@10 -- # set +x 00:16:51.518 12:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.518 12:44:24 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:51.518 12:44:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.518 12:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.518 12:44:24 -- common/autotest_common.sh@10 -- # set +x 00:16:51.778 12:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.779 12:44:24 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:51.779 12:44:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.779 12:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.779 12:44:24 -- common/autotest_common.sh@10 -- # set +x 00:16:52.040 12:44:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.040 12:44:25 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:52.040 12:44:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.040 12:44:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.040 12:44:25 -- common/autotest_common.sh@10 -- # set +x 00:16:52.611 12:44:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.611 12:44:25 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:52.611 12:44:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.611 12:44:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.611 12:44:25 -- common/autotest_common.sh@10 -- # set +x 00:16:52.872 12:44:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.872 12:44:25 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:52.872 12:44:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.872 12:44:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.872 12:44:25 -- common/autotest_common.sh@10 -- # set +x 00:16:53.132 12:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.133 12:44:26 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:53.133 12:44:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.133 12:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.133 12:44:26 -- common/autotest_common.sh@10 -- # set +x 00:16:53.394 12:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.394 12:44:26 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:53.394 12:44:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.394 12:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.394 12:44:26 -- common/autotest_common.sh@10 -- # set +x 00:16:53.655 12:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.655 12:44:26 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:53.655 12:44:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.655 12:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.655 12:44:26 -- common/autotest_common.sh@10 -- # set +x 00:16:54.226 12:44:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.226 12:44:27 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:54.226 12:44:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.226 12:44:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.226 12:44:27 -- common/autotest_common.sh@10 -- # set +x 00:16:54.486 12:44:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.486 12:44:27 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:54.486 12:44:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.486 12:44:27 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.486 12:44:27 -- common/autotest_common.sh@10 -- # set +x 00:16:54.747 12:44:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.747 12:44:27 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:54.747 12:44:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.747 12:44:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.747 12:44:27 -- common/autotest_common.sh@10 -- # set +x 00:16:55.008 12:44:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.008 12:44:28 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:55.008 12:44:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.008 12:44:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.008 12:44:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.579 12:44:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.579 12:44:28 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:55.579 12:44:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.579 12:44:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.579 12:44:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.839 12:44:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.839 12:44:28 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:55.839 12:44:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.839 12:44:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.839 12:44:28 -- common/autotest_common.sh@10 -- # set +x 00:16:56.099 12:44:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.099 12:44:29 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:56.099 12:44:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.099 12:44:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.099 12:44:29 -- common/autotest_common.sh@10 -- # set +x 00:16:56.358 12:44:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.358 12:44:29 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:56.358 12:44:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.358 12:44:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.358 12:44:29 -- common/autotest_common.sh@10 -- # set +x 00:16:56.618 12:44:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.618 12:44:29 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:56.618 12:44:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.618 12:44:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.618 12:44:29 -- common/autotest_common.sh@10 -- # set +x 00:16:57.190 12:44:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.190 12:44:30 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:57.190 12:44:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.190 12:44:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.190 12:44:30 -- common/autotest_common.sh@10 -- # set +x 00:16:57.450 12:44:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.450 12:44:30 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:57.450 12:44:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.450 12:44:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.450 12:44:30 -- common/autotest_common.sh@10 -- # set +x 00:16:57.710 12:44:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.710 12:44:30 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:57.710 12:44:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.710 12:44:30 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.710 12:44:30 -- common/autotest_common.sh@10 -- # set +x 00:16:57.970 12:44:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.970 12:44:31 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:57.970 12:44:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.970 12:44:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.970 12:44:31 -- common/autotest_common.sh@10 -- # set +x 00:16:58.541 12:44:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.541 12:44:31 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:58.541 12:44:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.541 12:44:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.541 12:44:31 -- common/autotest_common.sh@10 -- # set +x 00:16:58.802 12:44:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.802 12:44:31 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:58.802 12:44:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.802 12:44:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.802 12:44:31 -- common/autotest_common.sh@10 -- # set +x 00:16:59.063 12:44:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.063 12:44:32 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:59.063 12:44:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.063 12:44:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.063 12:44:32 -- common/autotest_common.sh@10 -- # set +x 00:16:59.323 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:59.323 12:44:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.323 12:44:32 -- target/connect_stress.sh@34 -- # kill -0 481000 00:16:59.323 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (481000) - No such process 00:16:59.323 12:44:32 -- target/connect_stress.sh@38 -- # wait 481000 00:16:59.323 12:44:32 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:59.323 12:44:32 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:59.323 12:44:32 -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:59.323 12:44:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:59.323 12:44:32 -- nvmf/common.sh@116 -- # sync 00:16:59.323 12:44:32 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:59.323 12:44:32 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:59.323 12:44:32 -- nvmf/common.sh@119 -- # set +e 00:16:59.323 12:44:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:59.323 12:44:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:59.323 rmmod nvme_rdma 00:16:59.323 rmmod nvme_fabrics 00:16:59.323 12:44:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:59.323 12:44:32 -- nvmf/common.sh@123 -- # set -e 00:16:59.323 12:44:32 -- nvmf/common.sh@124 -- # return 0 00:16:59.323 12:44:32 -- nvmf/common.sh@477 -- # '[' -n 480650 ']' 00:16:59.324 12:44:32 -- nvmf/common.sh@478 -- # killprocess 480650 00:16:59.324 12:44:32 -- common/autotest_common.sh@936 -- # '[' -z 480650 ']' 00:16:59.324 12:44:32 -- common/autotest_common.sh@940 -- # kill -0 480650 00:16:59.324 12:44:32 -- common/autotest_common.sh@941 -- # uname 00:16:59.324 12:44:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:59.324 12:44:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 480650 00:16:59.584 12:44:32 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:59.584 12:44:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:59.584 12:44:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 480650' 00:16:59.584 killing process with pid 480650 00:16:59.584 12:44:32 -- common/autotest_common.sh@955 -- # kill 480650 00:16:59.584 12:44:32 -- common/autotest_common.sh@960 -- # wait 480650 00:16:59.584 12:44:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:59.584 12:44:32 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:59.584 00:16:59.584 real 0m19.056s 00:16:59.584 user 0m41.883s 00:16:59.584 sys 0m6.936s 00:16:59.584 12:44:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:59.584 12:44:32 -- common/autotest_common.sh@10 -- # set +x 00:16:59.584 ************************************ 00:16:59.584 END TEST nvmf_connect_stress 00:16:59.584 ************************************ 00:16:59.584 12:44:32 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:59.584 12:44:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:59.584 12:44:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:59.584 12:44:32 -- common/autotest_common.sh@10 -- # set +x 00:16:59.846 ************************************ 00:16:59.846 START TEST nvmf_fused_ordering 00:16:59.846 ************************************ 00:16:59.846 12:44:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:59.846 * Looking for test storage... 00:16:59.846 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:59.846 12:44:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:59.846 12:44:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:59.846 12:44:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:59.846 12:44:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:59.846 12:44:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:59.846 12:44:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:59.846 12:44:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:59.846 12:44:32 -- scripts/common.sh@335 -- # IFS=.-: 00:16:59.846 12:44:32 -- scripts/common.sh@335 -- # read -ra ver1 00:16:59.846 12:44:32 -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.846 12:44:32 -- scripts/common.sh@336 -- # read -ra ver2 00:16:59.846 12:44:32 -- scripts/common.sh@337 -- # local 'op=<' 00:16:59.846 12:44:32 -- scripts/common.sh@339 -- # ver1_l=2 00:16:59.846 12:44:32 -- scripts/common.sh@340 -- # ver2_l=1 00:16:59.846 12:44:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:59.846 12:44:32 -- scripts/common.sh@343 -- # case "$op" in 00:16:59.846 12:44:32 -- scripts/common.sh@344 -- # : 1 00:16:59.846 12:44:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:59.846 12:44:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:59.846 12:44:32 -- scripts/common.sh@364 -- # decimal 1 00:16:59.846 12:44:32 -- scripts/common.sh@352 -- # local d=1 00:16:59.846 12:44:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.846 12:44:32 -- scripts/common.sh@354 -- # echo 1 00:16:59.846 12:44:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:59.846 12:44:32 -- scripts/common.sh@365 -- # decimal 2 00:16:59.846 12:44:32 -- scripts/common.sh@352 -- # local d=2 00:16:59.846 12:44:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.846 12:44:32 -- scripts/common.sh@354 -- # echo 2 00:16:59.846 12:44:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:59.846 12:44:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:59.846 12:44:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:59.846 12:44:32 -- scripts/common.sh@367 -- # return 0 00:16:59.846 12:44:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.846 12:44:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:59.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.846 --rc genhtml_branch_coverage=1 00:16:59.846 --rc genhtml_function_coverage=1 00:16:59.846 --rc genhtml_legend=1 00:16:59.846 --rc geninfo_all_blocks=1 00:16:59.846 --rc geninfo_unexecuted_blocks=1 00:16:59.846 00:16:59.846 ' 00:16:59.846 12:44:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:59.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.846 --rc genhtml_branch_coverage=1 00:16:59.846 --rc genhtml_function_coverage=1 00:16:59.846 --rc genhtml_legend=1 00:16:59.846 --rc geninfo_all_blocks=1 00:16:59.846 --rc geninfo_unexecuted_blocks=1 00:16:59.846 00:16:59.846 ' 00:16:59.846 12:44:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:59.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.846 --rc genhtml_branch_coverage=1 00:16:59.846 --rc genhtml_function_coverage=1 00:16:59.846 --rc genhtml_legend=1 00:16:59.846 --rc geninfo_all_blocks=1 00:16:59.846 --rc geninfo_unexecuted_blocks=1 00:16:59.846 00:16:59.846 ' 00:16:59.846 12:44:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:59.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.846 --rc genhtml_branch_coverage=1 00:16:59.846 --rc genhtml_function_coverage=1 00:16:59.846 --rc genhtml_legend=1 00:16:59.846 --rc geninfo_all_blocks=1 00:16:59.846 --rc geninfo_unexecuted_blocks=1 00:16:59.846 00:16:59.846 ' 00:16:59.846 12:44:32 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.846 12:44:32 -- nvmf/common.sh@7 -- # uname -s 00:16:59.846 12:44:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.846 12:44:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.846 12:44:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.846 12:44:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.846 12:44:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.846 12:44:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.846 12:44:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.846 12:44:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.846 12:44:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.846 12:44:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.846 12:44:32 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:59.846 12:44:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:59.846 12:44:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.846 12:44:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.846 12:44:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.846 12:44:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:59.846 12:44:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.846 12:44:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.846 12:44:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.846 12:44:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.846 12:44:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.847 12:44:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.847 12:44:32 -- paths/export.sh@5 -- # export PATH 00:16:59.847 12:44:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.847 12:44:32 -- nvmf/common.sh@46 -- # : 0 00:16:59.847 12:44:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:59.847 12:44:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:59.847 12:44:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:59.847 12:44:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.847 12:44:32 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.847 12:44:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:59.847 12:44:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:59.847 12:44:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:59.847 12:44:32 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:59.847 12:44:32 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:59.847 12:44:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.847 12:44:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:59.847 12:44:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:59.847 12:44:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:59.847 12:44:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.847 12:44:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.847 12:44:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.847 12:44:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:59.847 12:44:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:59.847 12:44:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:59.847 12:44:32 -- common/autotest_common.sh@10 -- # set +x 00:17:07.995 12:44:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:07.995 12:44:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:07.995 12:44:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:07.995 12:44:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:07.995 12:44:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:07.995 12:44:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:07.995 12:44:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:07.995 12:44:39 -- nvmf/common.sh@294 -- # net_devs=() 00:17:07.995 12:44:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:07.995 12:44:39 -- nvmf/common.sh@295 -- # e810=() 00:17:07.995 12:44:39 -- nvmf/common.sh@295 -- # local -ga e810 00:17:07.995 12:44:39 -- nvmf/common.sh@296 -- # x722=() 00:17:07.995 12:44:39 -- nvmf/common.sh@296 -- # local -ga x722 00:17:07.995 12:44:39 -- nvmf/common.sh@297 -- # mlx=() 00:17:07.995 12:44:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:07.995 12:44:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:07.995 12:44:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:07.995 12:44:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:07.995 12:44:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:07.995 12:44:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:07.995 12:44:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:07.995 12:44:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:07.995 12:44:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:07.995 12:44:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:07.995 12:44:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:07.995 12:44:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:07.995 12:44:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:07.995 12:44:39 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:07.995 12:44:39 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:07.995 12:44:39 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:07.995 12:44:39 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
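The nvmftestinit sequence now repeats for the fused_ordering test and again fills in NVME_CONNECT='nvme connect -i 15' (traced just below) plus the --hostnqn/--hostid pair generated earlier. The stress tests in this run drive the target through SPDK's own test binaries (for example the connect_stress app above) rather than nvme-cli, but for reference, the host-side command those variables describe would expand roughly as follows, with values copied from this log and shown purely as an illustration:

  # Host-side attach to the target configured above (illustrative; not run by this test).
  sudo nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
      --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6

  # ...and the matching detach when done:
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1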
00:17:07.995 12:44:39 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:07.995 12:44:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:07.995 12:44:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:07.995 12:44:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:17:07.995 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:17:07.995 12:44:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:07.995 12:44:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:07.995 12:44:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:07.995 12:44:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:07.995 12:44:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:07.995 12:44:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:07.995 12:44:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:07.995 12:44:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:17:07.995 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:17:07.995 12:44:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:07.995 12:44:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:07.995 12:44:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:07.995 12:44:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:07.995 12:44:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:07.995 12:44:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:07.995 12:44:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:07.995 12:44:39 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:07.995 12:44:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:07.995 12:44:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.995 12:44:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:07.995 12:44:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.995 12:44:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:17:07.995 Found net devices under 0000:98:00.0: mlx_0_0 00:17:07.995 12:44:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.995 12:44:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:07.995 12:44:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.995 12:44:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:07.995 12:44:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.995 12:44:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:17:07.995 Found net devices under 0000:98:00.1: mlx_0_1 00:17:07.995 12:44:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.995 12:44:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:07.995 12:44:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:07.995 12:44:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:07.995 12:44:39 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:07.995 12:44:39 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:07.995 12:44:39 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:07.995 12:44:39 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:07.995 12:44:39 -- nvmf/common.sh@57 -- # uname 00:17:07.995 12:44:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:07.995 12:44:39 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:07.995 12:44:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:07.995 12:44:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:07.995 
12:44:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:07.995 12:44:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:07.995 12:44:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:07.995 12:44:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:07.995 12:44:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:07.995 12:44:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:07.995 12:44:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:07.995 12:44:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:07.995 12:44:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:07.995 12:44:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:07.995 12:44:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:07.995 12:44:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:07.995 12:44:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:07.995 12:44:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.995 12:44:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:07.995 12:44:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:07.995 12:44:40 -- nvmf/common.sh@104 -- # continue 2 00:17:07.995 12:44:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:07.995 12:44:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.995 12:44:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:07.995 12:44:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.995 12:44:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:07.995 12:44:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:07.995 12:44:40 -- nvmf/common.sh@104 -- # continue 2 00:17:07.995 12:44:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:07.995 12:44:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:07.995 12:44:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:07.995 12:44:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:07.995 12:44:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:07.995 12:44:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:07.995 12:44:40 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:07.995 12:44:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:07.995 12:44:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:07.995 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:07.995 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:17:07.995 altname enp152s0f0np0 00:17:07.995 altname ens817f0np0 00:17:07.995 inet 192.168.100.8/24 scope global mlx_0_0 00:17:07.995 valid_lft forever preferred_lft forever 00:17:07.995 12:44:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:07.995 12:44:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:07.995 12:44:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:07.995 12:44:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:07.995 12:44:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:07.995 12:44:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:07.995 12:44:40 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:07.995 12:44:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:07.995 12:44:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:07.995 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:07.995 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:17:07.996 altname enp152s0f1np1 
00:17:07.996 altname ens817f1np1 00:17:07.996 inet 192.168.100.9/24 scope global mlx_0_1 00:17:07.996 valid_lft forever preferred_lft forever 00:17:07.996 12:44:40 -- nvmf/common.sh@410 -- # return 0 00:17:07.996 12:44:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:07.996 12:44:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:07.996 12:44:40 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:07.996 12:44:40 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:07.996 12:44:40 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:07.996 12:44:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:07.996 12:44:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:07.996 12:44:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:07.996 12:44:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:07.996 12:44:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:07.996 12:44:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:07.996 12:44:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.996 12:44:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:07.996 12:44:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:07.996 12:44:40 -- nvmf/common.sh@104 -- # continue 2 00:17:07.996 12:44:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:07.996 12:44:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.996 12:44:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:07.996 12:44:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.996 12:44:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:07.996 12:44:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:07.996 12:44:40 -- nvmf/common.sh@104 -- # continue 2 00:17:07.996 12:44:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:07.996 12:44:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:07.996 12:44:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:07.996 12:44:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:07.996 12:44:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:07.996 12:44:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:07.996 12:44:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:07.996 12:44:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:07.996 12:44:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:07.996 12:44:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:07.996 12:44:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:07.996 12:44:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:07.996 12:44:40 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:07.996 192.168.100.9' 00:17:07.996 12:44:40 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:07.996 192.168.100.9' 00:17:07.996 12:44:40 -- nvmf/common.sh@445 -- # head -n 1 00:17:07.996 12:44:40 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:07.996 12:44:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:07.996 192.168.100.9' 00:17:07.996 12:44:40 -- nvmf/common.sh@446 -- # tail -n +2 00:17:07.996 12:44:40 -- nvmf/common.sh@446 -- # head -n 1 00:17:07.996 12:44:40 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:07.996 12:44:40 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:07.996 12:44:40 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:17:07.996 12:44:40 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:07.996 12:44:40 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:07.996 12:44:40 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:07.996 12:44:40 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:07.996 12:44:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:07.996 12:44:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:07.996 12:44:40 -- common/autotest_common.sh@10 -- # set +x 00:17:07.996 12:44:40 -- nvmf/common.sh@469 -- # nvmfpid=486788 00:17:07.996 12:44:40 -- nvmf/common.sh@470 -- # waitforlisten 486788 00:17:07.996 12:44:40 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:07.996 12:44:40 -- common/autotest_common.sh@829 -- # '[' -z 486788 ']' 00:17:07.996 12:44:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.996 12:44:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:07.996 12:44:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.996 12:44:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:07.996 12:44:40 -- common/autotest_common.sh@10 -- # set +x 00:17:07.996 [2024-11-20 12:44:40.204095] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:07.996 [2024-11-20 12:44:40.204161] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.996 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.996 [2024-11-20 12:44:40.287831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.996 [2024-11-20 12:44:40.379795] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:07.996 [2024-11-20 12:44:40.379946] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.996 [2024-11-20 12:44:40.379956] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.996 [2024-11-20 12:44:40.379965] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.996 [2024-11-20 12:44:40.379998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.996 12:44:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:07.996 12:44:41 -- common/autotest_common.sh@862 -- # return 0 00:17:07.996 12:44:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:07.996 12:44:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:07.996 12:44:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.996 12:44:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.996 12:44:41 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:07.996 12:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.996 12:44:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.996 [2024-11-20 12:44:41.088037] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12a5950/0x12a9e40) succeed. 
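At this point both ConnectX-4 ports report addresses in 192.168.100.0/24, RDMA_IP_LIST has been captured, nvme-rdma is loaded, and the target application is started before the RPC-driven setup begins. The two mechanical pieces, pulled out of the trace as a plain sketch (the interface name, binary path, and flags are copied from the lines above; running them by hand is not part of the harness):

# First IPv4 address of an RDMA netdev, exactly as get_ip_address does:
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1     # -> 192.168.100.8

# Launch the SPDK NVMe-oF target on core mask 0x2 (one reactor on core 1):
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &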
00:17:08.258 [2024-11-20 12:44:41.101874] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12a6e50/0x12eb4e0) succeed. 00:17:08.258 12:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.258 12:44:41 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:08.258 12:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.258 12:44:41 -- common/autotest_common.sh@10 -- # set +x 00:17:08.258 12:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.258 12:44:41 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:08.258 12:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.258 12:44:41 -- common/autotest_common.sh@10 -- # set +x 00:17:08.258 [2024-11-20 12:44:41.171064] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:08.258 12:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.258 12:44:41 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:08.258 12:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.258 12:44:41 -- common/autotest_common.sh@10 -- # set +x 00:17:08.258 NULL1 00:17:08.258 12:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.258 12:44:41 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:08.258 12:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.258 12:44:41 -- common/autotest_common.sh@10 -- # set +x 00:17:08.258 12:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.258 12:44:41 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:08.258 12:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.258 12:44:41 -- common/autotest_common.sh@10 -- # set +x 00:17:08.258 12:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.258 12:44:41 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:08.258 [2024-11-20 12:44:41.240548] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
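The rpc_cmd lines above are what actually stand the device under test up: an RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1 with a 10-namespace limit, a listener on 192.168.100.8:4420, and a ~1 GB null bdev attached as namespace 1. Written out as direct scripts/rpc.py calls (a sketch assuming the default /var/tmp/spdk.sock RPC socket; every argument is taken from the trace), followed by the fused_ordering invocation the test then makes:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$SPDK/scripts/rpc.py bdev_null_create NULL1 1000 512        # ~1 GB null bdev, 512-byte blocks
$SPDK/scripts/rpc.py bdev_wait_for_examine
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# Then point the fused_ordering tool at that listener, exactly as in the trace:
$SPDK/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'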
00:17:08.258 [2024-11-20 12:44:41.240613] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487124 ] 00:17:08.258 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.519 Attached to nqn.2016-06.io.spdk:cnode1 00:17:08.519 Namespace ID: 1 size: 1GB 00:17:08.519 fused_ordering(0) 00:17:08.519 fused_ordering(1) 00:17:08.519 fused_ordering(2) 00:17:08.519 fused_ordering(3) 00:17:08.519 fused_ordering(4) 00:17:08.519 fused_ordering(5) 00:17:08.519 fused_ordering(6) 00:17:08.519 fused_ordering(7) 00:17:08.519 fused_ordering(8) 00:17:08.519 fused_ordering(9) 00:17:08.519 fused_ordering(10) 00:17:08.519 fused_ordering(11) 00:17:08.519 fused_ordering(12) 00:17:08.519 fused_ordering(13) 00:17:08.519 fused_ordering(14) 00:17:08.519 fused_ordering(15) 00:17:08.519 fused_ordering(16) 00:17:08.519 fused_ordering(17) 00:17:08.519 fused_ordering(18) 00:17:08.519 fused_ordering(19) 00:17:08.519 fused_ordering(20) 00:17:08.519 fused_ordering(21) 00:17:08.519 fused_ordering(22) 00:17:08.519 fused_ordering(23) 00:17:08.519 fused_ordering(24) 00:17:08.519 fused_ordering(25) 00:17:08.519 fused_ordering(26) 00:17:08.519 fused_ordering(27) 00:17:08.519 fused_ordering(28) 00:17:08.519 fused_ordering(29) 00:17:08.520 fused_ordering(30) 00:17:08.520 fused_ordering(31) 00:17:08.520 fused_ordering(32) 00:17:08.520 fused_ordering(33) 00:17:08.520 fused_ordering(34) 00:17:08.520 fused_ordering(35) 00:17:08.520 fused_ordering(36) 00:17:08.520 fused_ordering(37) 00:17:08.520 fused_ordering(38) 00:17:08.520 fused_ordering(39) 00:17:08.520 fused_ordering(40) 00:17:08.520 fused_ordering(41) 00:17:08.520 fused_ordering(42) 00:17:08.520 fused_ordering(43) 00:17:08.520 fused_ordering(44) 00:17:08.520 fused_ordering(45) 00:17:08.520 fused_ordering(46) 00:17:08.520 fused_ordering(47) 00:17:08.520 fused_ordering(48) 00:17:08.520 fused_ordering(49) 00:17:08.520 fused_ordering(50) 00:17:08.520 fused_ordering(51) 00:17:08.520 fused_ordering(52) 00:17:08.520 fused_ordering(53) 00:17:08.520 fused_ordering(54) 00:17:08.520 fused_ordering(55) 00:17:08.520 fused_ordering(56) 00:17:08.520 fused_ordering(57) 00:17:08.520 fused_ordering(58) 00:17:08.520 fused_ordering(59) 00:17:08.520 fused_ordering(60) 00:17:08.520 fused_ordering(61) 00:17:08.520 fused_ordering(62) 00:17:08.520 fused_ordering(63) 00:17:08.520 fused_ordering(64) 00:17:08.520 fused_ordering(65) 00:17:08.520 fused_ordering(66) 00:17:08.520 fused_ordering(67) 00:17:08.520 fused_ordering(68) 00:17:08.520 fused_ordering(69) 00:17:08.520 fused_ordering(70) 00:17:08.520 fused_ordering(71) 00:17:08.520 fused_ordering(72) 00:17:08.520 fused_ordering(73) 00:17:08.520 fused_ordering(74) 00:17:08.520 fused_ordering(75) 00:17:08.520 fused_ordering(76) 00:17:08.520 fused_ordering(77) 00:17:08.520 fused_ordering(78) 00:17:08.520 fused_ordering(79) 00:17:08.520 fused_ordering(80) 00:17:08.520 fused_ordering(81) 00:17:08.520 fused_ordering(82) 00:17:08.520 fused_ordering(83) 00:17:08.520 fused_ordering(84) 00:17:08.520 fused_ordering(85) 00:17:08.520 fused_ordering(86) 00:17:08.520 fused_ordering(87) 00:17:08.520 fused_ordering(88) 00:17:08.520 fused_ordering(89) 00:17:08.520 fused_ordering(90) 00:17:08.520 fused_ordering(91) 00:17:08.520 fused_ordering(92) 00:17:08.520 fused_ordering(93) 00:17:08.520 fused_ordering(94) 00:17:08.520 fused_ordering(95) 00:17:08.520 fused_ordering(96) 00:17:08.520 
fused_ordering(97) 00:17:08.520 fused_ordering(98) 00:17:08.520 fused_ordering(99) 00:17:08.520 fused_ordering(100) 00:17:08.520 fused_ordering(101) 00:17:08.520 fused_ordering(102) 00:17:08.520 fused_ordering(103) 00:17:08.520 fused_ordering(104) 00:17:08.520 fused_ordering(105) 00:17:08.520 fused_ordering(106) 00:17:08.520 fused_ordering(107) 00:17:08.520 fused_ordering(108) 00:17:08.520 fused_ordering(109) 00:17:08.520 fused_ordering(110) 00:17:08.520 fused_ordering(111) 00:17:08.520 fused_ordering(112) 00:17:08.520 fused_ordering(113) 00:17:08.520 fused_ordering(114) 00:17:08.520 fused_ordering(115) 00:17:08.520 fused_ordering(116) 00:17:08.520 fused_ordering(117) 00:17:08.520 fused_ordering(118) 00:17:08.520 fused_ordering(119) 00:17:08.520 fused_ordering(120) 00:17:08.520 fused_ordering(121) 00:17:08.520 fused_ordering(122) 00:17:08.520 fused_ordering(123) 00:17:08.520 fused_ordering(124) 00:17:08.520 fused_ordering(125) 00:17:08.520 fused_ordering(126) 00:17:08.520 fused_ordering(127) 00:17:08.520 fused_ordering(128) 00:17:08.520 fused_ordering(129) 00:17:08.520 fused_ordering(130) 00:17:08.520 fused_ordering(131) 00:17:08.520 fused_ordering(132) 00:17:08.520 fused_ordering(133) 00:17:08.520 fused_ordering(134) 00:17:08.520 fused_ordering(135) 00:17:08.520 fused_ordering(136) 00:17:08.520 fused_ordering(137) 00:17:08.520 fused_ordering(138) 00:17:08.520 fused_ordering(139) 00:17:08.520 fused_ordering(140) 00:17:08.520 fused_ordering(141) 00:17:08.520 fused_ordering(142) 00:17:08.520 fused_ordering(143) 00:17:08.520 fused_ordering(144) 00:17:08.520 fused_ordering(145) 00:17:08.520 fused_ordering(146) 00:17:08.520 fused_ordering(147) 00:17:08.520 fused_ordering(148) 00:17:08.520 fused_ordering(149) 00:17:08.520 fused_ordering(150) 00:17:08.520 fused_ordering(151) 00:17:08.520 fused_ordering(152) 00:17:08.520 fused_ordering(153) 00:17:08.520 fused_ordering(154) 00:17:08.520 fused_ordering(155) 00:17:08.520 fused_ordering(156) 00:17:08.520 fused_ordering(157) 00:17:08.520 fused_ordering(158) 00:17:08.520 fused_ordering(159) 00:17:08.520 fused_ordering(160) 00:17:08.520 fused_ordering(161) 00:17:08.520 fused_ordering(162) 00:17:08.520 fused_ordering(163) 00:17:08.520 fused_ordering(164) 00:17:08.520 fused_ordering(165) 00:17:08.520 fused_ordering(166) 00:17:08.520 fused_ordering(167) 00:17:08.520 fused_ordering(168) 00:17:08.520 fused_ordering(169) 00:17:08.520 fused_ordering(170) 00:17:08.520 fused_ordering(171) 00:17:08.520 fused_ordering(172) 00:17:08.520 fused_ordering(173) 00:17:08.520 fused_ordering(174) 00:17:08.520 fused_ordering(175) 00:17:08.520 fused_ordering(176) 00:17:08.520 fused_ordering(177) 00:17:08.520 fused_ordering(178) 00:17:08.520 fused_ordering(179) 00:17:08.520 fused_ordering(180) 00:17:08.520 fused_ordering(181) 00:17:08.520 fused_ordering(182) 00:17:08.520 fused_ordering(183) 00:17:08.520 fused_ordering(184) 00:17:08.520 fused_ordering(185) 00:17:08.520 fused_ordering(186) 00:17:08.520 fused_ordering(187) 00:17:08.520 fused_ordering(188) 00:17:08.520 fused_ordering(189) 00:17:08.520 fused_ordering(190) 00:17:08.520 fused_ordering(191) 00:17:08.520 fused_ordering(192) 00:17:08.520 fused_ordering(193) 00:17:08.520 fused_ordering(194) 00:17:08.520 fused_ordering(195) 00:17:08.520 fused_ordering(196) 00:17:08.520 fused_ordering(197) 00:17:08.520 fused_ordering(198) 00:17:08.520 fused_ordering(199) 00:17:08.520 fused_ordering(200) 00:17:08.520 fused_ordering(201) 00:17:08.520 fused_ordering(202) 00:17:08.520 fused_ordering(203) 00:17:08.520 fused_ordering(204) 
00:17:08.520 fused_ordering(205) 00:17:08.520 fused_ordering(206) 00:17:08.520 fused_ordering(207) 00:17:08.520 fused_ordering(208) 00:17:08.520 fused_ordering(209) 00:17:08.520 fused_ordering(210) 00:17:08.520 fused_ordering(211) 00:17:08.520 fused_ordering(212) 00:17:08.520 fused_ordering(213) 00:17:08.520 fused_ordering(214) 00:17:08.520 fused_ordering(215) 00:17:08.520 fused_ordering(216) 00:17:08.520 fused_ordering(217) 00:17:08.520 fused_ordering(218) 00:17:08.520 fused_ordering(219) 00:17:08.520 fused_ordering(220) 00:17:08.520 fused_ordering(221) 00:17:08.520 fused_ordering(222) 00:17:08.520 fused_ordering(223) 00:17:08.520 fused_ordering(224) 00:17:08.520 fused_ordering(225) 00:17:08.520 fused_ordering(226) 00:17:08.520 fused_ordering(227) 00:17:08.520 fused_ordering(228) 00:17:08.520 fused_ordering(229) 00:17:08.520 fused_ordering(230) 00:17:08.520 fused_ordering(231) 00:17:08.520 fused_ordering(232) 00:17:08.520 fused_ordering(233) 00:17:08.520 fused_ordering(234) 00:17:08.520 fused_ordering(235) 00:17:08.520 fused_ordering(236) 00:17:08.520 fused_ordering(237) 00:17:08.520 fused_ordering(238) 00:17:08.520 fused_ordering(239) 00:17:08.520 fused_ordering(240) 00:17:08.520 fused_ordering(241) 00:17:08.520 fused_ordering(242) 00:17:08.520 fused_ordering(243) 00:17:08.520 fused_ordering(244) 00:17:08.520 fused_ordering(245) 00:17:08.520 fused_ordering(246) 00:17:08.520 fused_ordering(247) 00:17:08.520 fused_ordering(248) 00:17:08.520 fused_ordering(249) 00:17:08.520 fused_ordering(250) 00:17:08.520 fused_ordering(251) 00:17:08.520 fused_ordering(252) 00:17:08.520 fused_ordering(253) 00:17:08.520 fused_ordering(254) 00:17:08.520 fused_ordering(255) 00:17:08.520 fused_ordering(256) 00:17:08.520 fused_ordering(257) 00:17:08.520 fused_ordering(258) 00:17:08.520 fused_ordering(259) 00:17:08.520 fused_ordering(260) 00:17:08.520 fused_ordering(261) 00:17:08.520 fused_ordering(262) 00:17:08.520 fused_ordering(263) 00:17:08.520 fused_ordering(264) 00:17:08.520 fused_ordering(265) 00:17:08.520 fused_ordering(266) 00:17:08.520 fused_ordering(267) 00:17:08.520 fused_ordering(268) 00:17:08.520 fused_ordering(269) 00:17:08.520 fused_ordering(270) 00:17:08.520 fused_ordering(271) 00:17:08.520 fused_ordering(272) 00:17:08.520 fused_ordering(273) 00:17:08.520 fused_ordering(274) 00:17:08.520 fused_ordering(275) 00:17:08.520 fused_ordering(276) 00:17:08.520 fused_ordering(277) 00:17:08.520 fused_ordering(278) 00:17:08.520 fused_ordering(279) 00:17:08.520 fused_ordering(280) 00:17:08.520 fused_ordering(281) 00:17:08.520 fused_ordering(282) 00:17:08.520 fused_ordering(283) 00:17:08.520 fused_ordering(284) 00:17:08.520 fused_ordering(285) 00:17:08.520 fused_ordering(286) 00:17:08.520 fused_ordering(287) 00:17:08.520 fused_ordering(288) 00:17:08.520 fused_ordering(289) 00:17:08.520 fused_ordering(290) 00:17:08.520 fused_ordering(291) 00:17:08.520 fused_ordering(292) 00:17:08.520 fused_ordering(293) 00:17:08.520 fused_ordering(294) 00:17:08.520 fused_ordering(295) 00:17:08.520 fused_ordering(296) 00:17:08.520 fused_ordering(297) 00:17:08.520 fused_ordering(298) 00:17:08.520 fused_ordering(299) 00:17:08.520 fused_ordering(300) 00:17:08.520 fused_ordering(301) 00:17:08.520 fused_ordering(302) 00:17:08.521 fused_ordering(303) 00:17:08.521 fused_ordering(304) 00:17:08.521 fused_ordering(305) 00:17:08.521 fused_ordering(306) 00:17:08.521 fused_ordering(307) 00:17:08.521 fused_ordering(308) 00:17:08.521 fused_ordering(309) 00:17:08.521 fused_ordering(310) 00:17:08.521 fused_ordering(311) 00:17:08.521 
fused_ordering(312) 00:17:08.521 fused_ordering(313) 00:17:08.521 fused_ordering(314) 00:17:08.521 fused_ordering(315) 00:17:08.521 fused_ordering(316) 00:17:08.521 fused_ordering(317) 00:17:08.521 fused_ordering(318) 00:17:08.521 fused_ordering(319) 00:17:08.521 fused_ordering(320) 00:17:08.521 fused_ordering(321) 00:17:08.521 fused_ordering(322) 00:17:08.521 fused_ordering(323) 00:17:08.521 fused_ordering(324) 00:17:08.521 fused_ordering(325) 00:17:08.521 fused_ordering(326) 00:17:08.521 fused_ordering(327) 00:17:08.521 fused_ordering(328) 00:17:08.521 fused_ordering(329) 00:17:08.521 fused_ordering(330) 00:17:08.521 fused_ordering(331) 00:17:08.521 fused_ordering(332) 00:17:08.521 fused_ordering(333) 00:17:08.521 fused_ordering(334) 00:17:08.521 fused_ordering(335) 00:17:08.521 fused_ordering(336) 00:17:08.521 fused_ordering(337) 00:17:08.521 fused_ordering(338) 00:17:08.521 fused_ordering(339) 00:17:08.521 fused_ordering(340) 00:17:08.521 fused_ordering(341) 00:17:08.521 fused_ordering(342) 00:17:08.521 fused_ordering(343) 00:17:08.521 fused_ordering(344) 00:17:08.521 fused_ordering(345) 00:17:08.521 fused_ordering(346) 00:17:08.521 fused_ordering(347) 00:17:08.521 fused_ordering(348) 00:17:08.521 fused_ordering(349) 00:17:08.521 fused_ordering(350) 00:17:08.521 fused_ordering(351) 00:17:08.521 fused_ordering(352) 00:17:08.521 fused_ordering(353) 00:17:08.521 fused_ordering(354) 00:17:08.521 fused_ordering(355) 00:17:08.521 fused_ordering(356) 00:17:08.521 fused_ordering(357) 00:17:08.521 fused_ordering(358) 00:17:08.521 fused_ordering(359) 00:17:08.521 fused_ordering(360) 00:17:08.521 fused_ordering(361) 00:17:08.521 fused_ordering(362) 00:17:08.521 fused_ordering(363) 00:17:08.521 fused_ordering(364) 00:17:08.521 fused_ordering(365) 00:17:08.521 fused_ordering(366) 00:17:08.521 fused_ordering(367) 00:17:08.521 fused_ordering(368) 00:17:08.521 fused_ordering(369) 00:17:08.521 fused_ordering(370) 00:17:08.521 fused_ordering(371) 00:17:08.521 fused_ordering(372) 00:17:08.521 fused_ordering(373) 00:17:08.521 fused_ordering(374) 00:17:08.521 fused_ordering(375) 00:17:08.521 fused_ordering(376) 00:17:08.521 fused_ordering(377) 00:17:08.521 fused_ordering(378) 00:17:08.521 fused_ordering(379) 00:17:08.521 fused_ordering(380) 00:17:08.521 fused_ordering(381) 00:17:08.521 fused_ordering(382) 00:17:08.521 fused_ordering(383) 00:17:08.521 fused_ordering(384) 00:17:08.521 fused_ordering(385) 00:17:08.521 fused_ordering(386) 00:17:08.521 fused_ordering(387) 00:17:08.521 fused_ordering(388) 00:17:08.521 fused_ordering(389) 00:17:08.521 fused_ordering(390) 00:17:08.521 fused_ordering(391) 00:17:08.521 fused_ordering(392) 00:17:08.521 fused_ordering(393) 00:17:08.521 fused_ordering(394) 00:17:08.521 fused_ordering(395) 00:17:08.521 fused_ordering(396) 00:17:08.521 fused_ordering(397) 00:17:08.521 fused_ordering(398) 00:17:08.521 fused_ordering(399) 00:17:08.521 fused_ordering(400) 00:17:08.521 fused_ordering(401) 00:17:08.521 fused_ordering(402) 00:17:08.521 fused_ordering(403) 00:17:08.521 fused_ordering(404) 00:17:08.521 fused_ordering(405) 00:17:08.521 fused_ordering(406) 00:17:08.521 fused_ordering(407) 00:17:08.521 fused_ordering(408) 00:17:08.521 fused_ordering(409) 00:17:08.521 fused_ordering(410) 00:17:08.782 fused_ordering(411) 00:17:08.782 fused_ordering(412) 00:17:08.782 fused_ordering(413) 00:17:08.782 fused_ordering(414) 00:17:08.782 fused_ordering(415) 00:17:08.782 fused_ordering(416) 00:17:08.782 fused_ordering(417) 00:17:08.782 fused_ordering(418) 00:17:08.782 fused_ordering(419) 
00:17:08.783 fused_ordering(420) 00:17:08.783 fused_ordering(421) 00:17:08.783 fused_ordering(422) 00:17:08.783 fused_ordering(423) 00:17:08.783 fused_ordering(424) 00:17:08.783 fused_ordering(425) 00:17:08.783 fused_ordering(426) 00:17:08.783 fused_ordering(427) 00:17:08.783 fused_ordering(428) 00:17:08.783 fused_ordering(429) 00:17:08.783 fused_ordering(430) 00:17:08.783 fused_ordering(431) 00:17:08.783 fused_ordering(432) 00:17:08.783 fused_ordering(433) 00:17:08.783 fused_ordering(434) 00:17:08.783 fused_ordering(435) 00:17:08.783 fused_ordering(436) 00:17:08.783 fused_ordering(437) 00:17:08.783 fused_ordering(438) 00:17:08.783 fused_ordering(439) 00:17:08.783 fused_ordering(440) 00:17:08.783 fused_ordering(441) 00:17:08.783 fused_ordering(442) 00:17:08.783 fused_ordering(443) 00:17:08.783 fused_ordering(444) 00:17:08.783 fused_ordering(445) 00:17:08.783 fused_ordering(446) 00:17:08.783 fused_ordering(447) 00:17:08.783 fused_ordering(448) 00:17:08.783 fused_ordering(449) 00:17:08.783 fused_ordering(450) 00:17:08.783 fused_ordering(451) 00:17:08.783 fused_ordering(452) 00:17:08.783 fused_ordering(453) 00:17:08.783 fused_ordering(454) 00:17:08.783 fused_ordering(455) 00:17:08.783 fused_ordering(456) 00:17:08.783 fused_ordering(457) 00:17:08.783 fused_ordering(458) 00:17:08.783 fused_ordering(459) 00:17:08.783 fused_ordering(460) 00:17:08.783 fused_ordering(461) 00:17:08.783 fused_ordering(462) 00:17:08.783 fused_ordering(463) 00:17:08.783 fused_ordering(464) 00:17:08.783 fused_ordering(465) 00:17:08.783 fused_ordering(466) 00:17:08.783 fused_ordering(467) 00:17:08.783 fused_ordering(468) 00:17:08.783 fused_ordering(469) 00:17:08.783 fused_ordering(470) 00:17:08.783 fused_ordering(471) 00:17:08.783 fused_ordering(472) 00:17:08.783 fused_ordering(473) 00:17:08.783 fused_ordering(474) 00:17:08.783 fused_ordering(475) 00:17:08.783 fused_ordering(476) 00:17:08.783 fused_ordering(477) 00:17:08.783 fused_ordering(478) 00:17:08.783 fused_ordering(479) 00:17:08.783 fused_ordering(480) 00:17:08.783 fused_ordering(481) 00:17:08.783 fused_ordering(482) 00:17:08.783 fused_ordering(483) 00:17:08.783 fused_ordering(484) 00:17:08.783 fused_ordering(485) 00:17:08.783 fused_ordering(486) 00:17:08.783 fused_ordering(487) 00:17:08.783 fused_ordering(488) 00:17:08.783 fused_ordering(489) 00:17:08.783 fused_ordering(490) 00:17:08.783 fused_ordering(491) 00:17:08.783 fused_ordering(492) 00:17:08.783 fused_ordering(493) 00:17:08.783 fused_ordering(494) 00:17:08.783 fused_ordering(495) 00:17:08.783 fused_ordering(496) 00:17:08.783 fused_ordering(497) 00:17:08.783 fused_ordering(498) 00:17:08.783 fused_ordering(499) 00:17:08.783 fused_ordering(500) 00:17:08.783 fused_ordering(501) 00:17:08.783 fused_ordering(502) 00:17:08.783 fused_ordering(503) 00:17:08.783 fused_ordering(504) 00:17:08.783 fused_ordering(505) 00:17:08.783 fused_ordering(506) 00:17:08.783 fused_ordering(507) 00:17:08.783 fused_ordering(508) 00:17:08.783 fused_ordering(509) 00:17:08.783 fused_ordering(510) 00:17:08.783 fused_ordering(511) 00:17:08.783 fused_ordering(512) 00:17:08.783 fused_ordering(513) 00:17:08.783 fused_ordering(514) 00:17:08.783 fused_ordering(515) 00:17:08.783 fused_ordering(516) 00:17:08.783 fused_ordering(517) 00:17:08.783 fused_ordering(518) 00:17:08.783 fused_ordering(519) 00:17:08.783 fused_ordering(520) 00:17:08.783 fused_ordering(521) 00:17:08.783 fused_ordering(522) 00:17:08.783 fused_ordering(523) 00:17:08.783 fused_ordering(524) 00:17:08.783 fused_ordering(525) 00:17:08.783 fused_ordering(526) 00:17:08.783 
fused_ordering(527) 00:17:08.783 fused_ordering(528) 00:17:08.783 fused_ordering(529) 00:17:08.783 fused_ordering(530) 00:17:08.783 fused_ordering(531) 00:17:08.783 fused_ordering(532) 00:17:08.783 fused_ordering(533) 00:17:08.783 fused_ordering(534) 00:17:08.783 fused_ordering(535) 00:17:08.783 fused_ordering(536) 00:17:08.783 fused_ordering(537) 00:17:08.783 fused_ordering(538) 00:17:08.783 fused_ordering(539) 00:17:08.783 fused_ordering(540) 00:17:08.783 fused_ordering(541) 00:17:08.783 fused_ordering(542) 00:17:08.783 fused_ordering(543) 00:17:08.783 fused_ordering(544) 00:17:08.783 fused_ordering(545) 00:17:08.783 fused_ordering(546) 00:17:08.783 fused_ordering(547) 00:17:08.783 fused_ordering(548) 00:17:08.783 fused_ordering(549) 00:17:08.783 fused_ordering(550) 00:17:08.783 fused_ordering(551) 00:17:08.783 fused_ordering(552) 00:17:08.783 fused_ordering(553) 00:17:08.783 fused_ordering(554) 00:17:08.783 fused_ordering(555) 00:17:08.783 fused_ordering(556) 00:17:08.783 fused_ordering(557) 00:17:08.783 fused_ordering(558) 00:17:08.783 fused_ordering(559) 00:17:08.783 fused_ordering(560) 00:17:08.783 fused_ordering(561) 00:17:08.783 fused_ordering(562) 00:17:08.783 fused_ordering(563) 00:17:08.783 fused_ordering(564) 00:17:08.783 fused_ordering(565) 00:17:08.783 fused_ordering(566) 00:17:08.783 fused_ordering(567) 00:17:08.783 fused_ordering(568) 00:17:08.783 fused_ordering(569) 00:17:08.783 fused_ordering(570) 00:17:08.783 fused_ordering(571) 00:17:08.783 fused_ordering(572) 00:17:08.783 fused_ordering(573) 00:17:08.783 fused_ordering(574) 00:17:08.783 fused_ordering(575) 00:17:08.783 fused_ordering(576) 00:17:08.783 fused_ordering(577) 00:17:08.783 fused_ordering(578) 00:17:08.783 fused_ordering(579) 00:17:08.783 fused_ordering(580) 00:17:08.783 fused_ordering(581) 00:17:08.783 fused_ordering(582) 00:17:08.783 fused_ordering(583) 00:17:08.783 fused_ordering(584) 00:17:08.783 fused_ordering(585) 00:17:08.783 fused_ordering(586) 00:17:08.783 fused_ordering(587) 00:17:08.783 fused_ordering(588) 00:17:08.783 fused_ordering(589) 00:17:08.783 fused_ordering(590) 00:17:08.783 fused_ordering(591) 00:17:08.783 fused_ordering(592) 00:17:08.783 fused_ordering(593) 00:17:08.783 fused_ordering(594) 00:17:08.783 fused_ordering(595) 00:17:08.783 fused_ordering(596) 00:17:08.783 fused_ordering(597) 00:17:08.783 fused_ordering(598) 00:17:08.783 fused_ordering(599) 00:17:08.783 fused_ordering(600) 00:17:08.783 fused_ordering(601) 00:17:08.783 fused_ordering(602) 00:17:08.783 fused_ordering(603) 00:17:08.783 fused_ordering(604) 00:17:08.783 fused_ordering(605) 00:17:08.783 fused_ordering(606) 00:17:08.783 fused_ordering(607) 00:17:08.783 fused_ordering(608) 00:17:08.783 fused_ordering(609) 00:17:08.783 fused_ordering(610) 00:17:08.783 fused_ordering(611) 00:17:08.783 fused_ordering(612) 00:17:08.783 fused_ordering(613) 00:17:08.783 fused_ordering(614) 00:17:08.783 fused_ordering(615) 00:17:08.783 fused_ordering(616) 00:17:08.783 fused_ordering(617) 00:17:08.783 fused_ordering(618) 00:17:08.783 fused_ordering(619) 00:17:08.783 fused_ordering(620) 00:17:08.783 fused_ordering(621) 00:17:08.783 fused_ordering(622) 00:17:08.783 fused_ordering(623) 00:17:08.783 fused_ordering(624) 00:17:08.783 fused_ordering(625) 00:17:08.783 fused_ordering(626) 00:17:08.783 fused_ordering(627) 00:17:08.783 fused_ordering(628) 00:17:08.783 fused_ordering(629) 00:17:08.783 fused_ordering(630) 00:17:08.783 fused_ordering(631) 00:17:08.783 fused_ordering(632) 00:17:08.783 fused_ordering(633) 00:17:08.783 fused_ordering(634) 
00:17:08.783 fused_ordering(635) 00:17:08.783 fused_ordering(636) 00:17:08.783 fused_ordering(637) 00:17:08.783 fused_ordering(638) 00:17:08.783 fused_ordering(639) 00:17:08.783 fused_ordering(640) 00:17:08.783 fused_ordering(641) 00:17:08.783 fused_ordering(642) 00:17:08.783 fused_ordering(643) 00:17:08.783 fused_ordering(644) 00:17:08.783 fused_ordering(645) 00:17:08.783 fused_ordering(646) 00:17:08.783 fused_ordering(647) 00:17:08.783 fused_ordering(648) 00:17:08.783 fused_ordering(649) 00:17:08.783 fused_ordering(650) 00:17:08.783 fused_ordering(651) 00:17:08.783 fused_ordering(652) 00:17:08.783 fused_ordering(653) 00:17:08.783 fused_ordering(654) 00:17:08.783 fused_ordering(655) 00:17:08.783 fused_ordering(656) 00:17:08.783 fused_ordering(657) 00:17:08.783 fused_ordering(658) 00:17:08.783 fused_ordering(659) 00:17:08.783 fused_ordering(660) 00:17:08.783 fused_ordering(661) 00:17:08.783 fused_ordering(662) 00:17:08.783 fused_ordering(663) 00:17:08.783 fused_ordering(664) 00:17:08.783 fused_ordering(665) 00:17:08.783 fused_ordering(666) 00:17:08.783 fused_ordering(667) 00:17:08.783 fused_ordering(668) 00:17:08.783 fused_ordering(669) 00:17:08.783 fused_ordering(670) 00:17:08.783 fused_ordering(671) 00:17:08.783 fused_ordering(672) 00:17:08.783 fused_ordering(673) 00:17:08.783 fused_ordering(674) 00:17:08.783 fused_ordering(675) 00:17:08.783 fused_ordering(676) 00:17:08.783 fused_ordering(677) 00:17:08.783 fused_ordering(678) 00:17:08.783 fused_ordering(679) 00:17:08.783 fused_ordering(680) 00:17:08.783 fused_ordering(681) 00:17:08.783 fused_ordering(682) 00:17:08.783 fused_ordering(683) 00:17:08.783 fused_ordering(684) 00:17:08.783 fused_ordering(685) 00:17:08.783 fused_ordering(686) 00:17:08.783 fused_ordering(687) 00:17:08.784 fused_ordering(688) 00:17:08.784 fused_ordering(689) 00:17:08.784 fused_ordering(690) 00:17:08.784 fused_ordering(691) 00:17:08.784 fused_ordering(692) 00:17:08.784 fused_ordering(693) 00:17:08.784 fused_ordering(694) 00:17:08.784 fused_ordering(695) 00:17:08.784 fused_ordering(696) 00:17:08.784 fused_ordering(697) 00:17:08.784 fused_ordering(698) 00:17:08.784 fused_ordering(699) 00:17:08.784 fused_ordering(700) 00:17:08.784 fused_ordering(701) 00:17:08.784 fused_ordering(702) 00:17:08.784 fused_ordering(703) 00:17:08.784 fused_ordering(704) 00:17:08.784 fused_ordering(705) 00:17:08.784 fused_ordering(706) 00:17:08.784 fused_ordering(707) 00:17:08.784 fused_ordering(708) 00:17:08.784 fused_ordering(709) 00:17:08.784 fused_ordering(710) 00:17:08.784 fused_ordering(711) 00:17:08.784 fused_ordering(712) 00:17:08.784 fused_ordering(713) 00:17:08.784 fused_ordering(714) 00:17:08.784 fused_ordering(715) 00:17:08.784 fused_ordering(716) 00:17:08.784 fused_ordering(717) 00:17:08.784 fused_ordering(718) 00:17:08.784 fused_ordering(719) 00:17:08.784 fused_ordering(720) 00:17:08.784 fused_ordering(721) 00:17:08.784 fused_ordering(722) 00:17:08.784 fused_ordering(723) 00:17:08.784 fused_ordering(724) 00:17:08.784 fused_ordering(725) 00:17:08.784 fused_ordering(726) 00:17:08.784 fused_ordering(727) 00:17:08.784 fused_ordering(728) 00:17:08.784 fused_ordering(729) 00:17:08.784 fused_ordering(730) 00:17:08.784 fused_ordering(731) 00:17:08.784 fused_ordering(732) 00:17:08.784 fused_ordering(733) 00:17:08.784 fused_ordering(734) 00:17:08.784 fused_ordering(735) 00:17:08.784 fused_ordering(736) 00:17:08.784 fused_ordering(737) 00:17:08.784 fused_ordering(738) 00:17:08.784 fused_ordering(739) 00:17:08.784 fused_ordering(740) 00:17:08.784 fused_ordering(741) 00:17:08.784 
fused_ordering(742) 00:17:08.784 fused_ordering(743) 00:17:08.784 fused_ordering(744) 00:17:08.784 fused_ordering(745) 00:17:08.784 fused_ordering(746) 00:17:08.784 fused_ordering(747) 00:17:08.784 fused_ordering(748) 00:17:08.784 fused_ordering(749) 00:17:08.784 fused_ordering(750) 00:17:08.784 fused_ordering(751) 00:17:08.784 fused_ordering(752) 00:17:08.784 fused_ordering(753) 00:17:08.784 fused_ordering(754) 00:17:08.784 fused_ordering(755) 00:17:08.784 fused_ordering(756) 00:17:08.784 fused_ordering(757) 00:17:08.784 fused_ordering(758) 00:17:08.784 fused_ordering(759) 00:17:08.784 fused_ordering(760) 00:17:08.784 fused_ordering(761) 00:17:08.784 fused_ordering(762) 00:17:08.784 fused_ordering(763) 00:17:08.784 fused_ordering(764) 00:17:08.784 fused_ordering(765) 00:17:08.784 fused_ordering(766) 00:17:08.784 fused_ordering(767) 00:17:08.784 fused_ordering(768) 00:17:08.784 fused_ordering(769) 00:17:08.784 fused_ordering(770) 00:17:08.784 fused_ordering(771) 00:17:08.784 fused_ordering(772) 00:17:08.784 fused_ordering(773) 00:17:08.784 fused_ordering(774) 00:17:08.784 fused_ordering(775) 00:17:08.784 fused_ordering(776) 00:17:08.784 fused_ordering(777) 00:17:08.784 fused_ordering(778) 00:17:08.784 fused_ordering(779) 00:17:08.784 fused_ordering(780) 00:17:08.784 fused_ordering(781) 00:17:08.784 fused_ordering(782) 00:17:08.784 fused_ordering(783) 00:17:08.784 fused_ordering(784) 00:17:08.784 fused_ordering(785) 00:17:08.784 fused_ordering(786) 00:17:08.784 fused_ordering(787) 00:17:08.784 fused_ordering(788) 00:17:08.784 fused_ordering(789) 00:17:08.784 fused_ordering(790) 00:17:08.784 fused_ordering(791) 00:17:08.784 fused_ordering(792) 00:17:08.784 fused_ordering(793) 00:17:08.784 fused_ordering(794) 00:17:08.784 fused_ordering(795) 00:17:08.784 fused_ordering(796) 00:17:08.784 fused_ordering(797) 00:17:08.784 fused_ordering(798) 00:17:08.784 fused_ordering(799) 00:17:08.784 fused_ordering(800) 00:17:08.784 fused_ordering(801) 00:17:08.784 fused_ordering(802) 00:17:08.784 fused_ordering(803) 00:17:08.784 fused_ordering(804) 00:17:08.784 fused_ordering(805) 00:17:08.784 fused_ordering(806) 00:17:08.784 fused_ordering(807) 00:17:08.784 fused_ordering(808) 00:17:08.784 fused_ordering(809) 00:17:08.784 fused_ordering(810) 00:17:08.784 fused_ordering(811) 00:17:08.784 fused_ordering(812) 00:17:08.784 fused_ordering(813) 00:17:08.784 fused_ordering(814) 00:17:08.784 fused_ordering(815) 00:17:08.784 fused_ordering(816) 00:17:08.784 fused_ordering(817) 00:17:08.784 fused_ordering(818) 00:17:08.784 fused_ordering(819) 00:17:08.784 fused_ordering(820) 00:17:09.046 fused_ordering(821) 00:17:09.046 fused_ordering(822) 00:17:09.046 fused_ordering(823) 00:17:09.046 fused_ordering(824) 00:17:09.046 fused_ordering(825) 00:17:09.046 fused_ordering(826) 00:17:09.046 fused_ordering(827) 00:17:09.046 fused_ordering(828) 00:17:09.046 fused_ordering(829) 00:17:09.046 fused_ordering(830) 00:17:09.046 fused_ordering(831) 00:17:09.046 fused_ordering(832) 00:17:09.046 fused_ordering(833) 00:17:09.046 fused_ordering(834) 00:17:09.046 fused_ordering(835) 00:17:09.046 fused_ordering(836) 00:17:09.046 fused_ordering(837) 00:17:09.046 fused_ordering(838) 00:17:09.046 fused_ordering(839) 00:17:09.046 fused_ordering(840) 00:17:09.046 fused_ordering(841) 00:17:09.046 fused_ordering(842) 00:17:09.046 fused_ordering(843) 00:17:09.046 fused_ordering(844) 00:17:09.046 fused_ordering(845) 00:17:09.046 fused_ordering(846) 00:17:09.046 fused_ordering(847) 00:17:09.046 fused_ordering(848) 00:17:09.046 fused_ordering(849) 
00:17:09.046 fused_ordering(850) 00:17:09.046 fused_ordering(851) 00:17:09.046 fused_ordering(852) 00:17:09.046 fused_ordering(853) 00:17:09.046 fused_ordering(854) 00:17:09.046 fused_ordering(855) 00:17:09.046 fused_ordering(856) 00:17:09.046 fused_ordering(857) 00:17:09.046 fused_ordering(858) 00:17:09.046 fused_ordering(859) 00:17:09.046 fused_ordering(860) 00:17:09.046 fused_ordering(861) 00:17:09.046 fused_ordering(862) 00:17:09.046 fused_ordering(863) 00:17:09.046 fused_ordering(864) 00:17:09.046 fused_ordering(865) 00:17:09.046 fused_ordering(866) 00:17:09.046 fused_ordering(867) 00:17:09.046 fused_ordering(868) 00:17:09.046 fused_ordering(869) 00:17:09.046 fused_ordering(870) 00:17:09.046 fused_ordering(871) 00:17:09.046 fused_ordering(872) 00:17:09.046 fused_ordering(873) 00:17:09.046 fused_ordering(874) 00:17:09.046 fused_ordering(875) 00:17:09.046 fused_ordering(876) 00:17:09.046 fused_ordering(877) 00:17:09.046 fused_ordering(878) 00:17:09.046 fused_ordering(879) 00:17:09.046 fused_ordering(880) 00:17:09.046 fused_ordering(881) 00:17:09.046 fused_ordering(882) 00:17:09.046 fused_ordering(883) 00:17:09.046 fused_ordering(884) 00:17:09.046 fused_ordering(885) 00:17:09.046 fused_ordering(886) 00:17:09.046 fused_ordering(887) 00:17:09.046 fused_ordering(888) 00:17:09.046 fused_ordering(889) 00:17:09.046 fused_ordering(890) 00:17:09.046 fused_ordering(891) 00:17:09.046 fused_ordering(892) 00:17:09.046 fused_ordering(893) 00:17:09.046 fused_ordering(894) 00:17:09.046 fused_ordering(895) 00:17:09.046 fused_ordering(896) 00:17:09.046 fused_ordering(897) 00:17:09.046 fused_ordering(898) 00:17:09.046 fused_ordering(899) 00:17:09.046 fused_ordering(900) 00:17:09.046 fused_ordering(901) 00:17:09.046 fused_ordering(902) 00:17:09.046 fused_ordering(903) 00:17:09.046 fused_ordering(904) 00:17:09.046 fused_ordering(905) 00:17:09.046 fused_ordering(906) 00:17:09.046 fused_ordering(907) 00:17:09.046 fused_ordering(908) 00:17:09.046 fused_ordering(909) 00:17:09.046 fused_ordering(910) 00:17:09.046 fused_ordering(911) 00:17:09.046 fused_ordering(912) 00:17:09.046 fused_ordering(913) 00:17:09.046 fused_ordering(914) 00:17:09.046 fused_ordering(915) 00:17:09.046 fused_ordering(916) 00:17:09.046 fused_ordering(917) 00:17:09.046 fused_ordering(918) 00:17:09.046 fused_ordering(919) 00:17:09.046 fused_ordering(920) 00:17:09.046 fused_ordering(921) 00:17:09.046 fused_ordering(922) 00:17:09.046 fused_ordering(923) 00:17:09.046 fused_ordering(924) 00:17:09.046 fused_ordering(925) 00:17:09.046 fused_ordering(926) 00:17:09.046 fused_ordering(927) 00:17:09.046 fused_ordering(928) 00:17:09.046 fused_ordering(929) 00:17:09.046 fused_ordering(930) 00:17:09.046 fused_ordering(931) 00:17:09.046 fused_ordering(932) 00:17:09.046 fused_ordering(933) 00:17:09.046 fused_ordering(934) 00:17:09.046 fused_ordering(935) 00:17:09.046 fused_ordering(936) 00:17:09.046 fused_ordering(937) 00:17:09.046 fused_ordering(938) 00:17:09.046 fused_ordering(939) 00:17:09.046 fused_ordering(940) 00:17:09.046 fused_ordering(941) 00:17:09.046 fused_ordering(942) 00:17:09.046 fused_ordering(943) 00:17:09.046 fused_ordering(944) 00:17:09.046 fused_ordering(945) 00:17:09.046 fused_ordering(946) 00:17:09.046 fused_ordering(947) 00:17:09.046 fused_ordering(948) 00:17:09.046 fused_ordering(949) 00:17:09.046 fused_ordering(950) 00:17:09.046 fused_ordering(951) 00:17:09.046 fused_ordering(952) 00:17:09.046 fused_ordering(953) 00:17:09.046 fused_ordering(954) 00:17:09.046 fused_ordering(955) 00:17:09.046 fused_ordering(956) 00:17:09.046 
fused_ordering(957) 00:17:09.046 fused_ordering(958) 00:17:09.046 fused_ordering(959) 00:17:09.046 fused_ordering(960) 00:17:09.046 fused_ordering(961) 00:17:09.046 fused_ordering(962) 00:17:09.046 fused_ordering(963) 00:17:09.047 fused_ordering(964) 00:17:09.047 fused_ordering(965) 00:17:09.047 fused_ordering(966) 00:17:09.047 fused_ordering(967) 00:17:09.047 fused_ordering(968) 00:17:09.047 fused_ordering(969) 00:17:09.047 fused_ordering(970) 00:17:09.047 fused_ordering(971) 00:17:09.047 fused_ordering(972) 00:17:09.047 fused_ordering(973) 00:17:09.047 fused_ordering(974) 00:17:09.047 fused_ordering(975) 00:17:09.047 fused_ordering(976) 00:17:09.047 fused_ordering(977) 00:17:09.047 fused_ordering(978) 00:17:09.047 fused_ordering(979) 00:17:09.047 fused_ordering(980) 00:17:09.047 fused_ordering(981) 00:17:09.047 fused_ordering(982) 00:17:09.047 fused_ordering(983) 00:17:09.047 fused_ordering(984) 00:17:09.047 fused_ordering(985) 00:17:09.047 fused_ordering(986) 00:17:09.047 fused_ordering(987) 00:17:09.047 fused_ordering(988) 00:17:09.047 fused_ordering(989) 00:17:09.047 fused_ordering(990) 00:17:09.047 fused_ordering(991) 00:17:09.047 fused_ordering(992) 00:17:09.047 fused_ordering(993) 00:17:09.047 fused_ordering(994) 00:17:09.047 fused_ordering(995) 00:17:09.047 fused_ordering(996) 00:17:09.047 fused_ordering(997) 00:17:09.047 fused_ordering(998) 00:17:09.047 fused_ordering(999) 00:17:09.047 fused_ordering(1000) 00:17:09.047 fused_ordering(1001) 00:17:09.047 fused_ordering(1002) 00:17:09.047 fused_ordering(1003) 00:17:09.047 fused_ordering(1004) 00:17:09.047 fused_ordering(1005) 00:17:09.047 fused_ordering(1006) 00:17:09.047 fused_ordering(1007) 00:17:09.047 fused_ordering(1008) 00:17:09.047 fused_ordering(1009) 00:17:09.047 fused_ordering(1010) 00:17:09.047 fused_ordering(1011) 00:17:09.047 fused_ordering(1012) 00:17:09.047 fused_ordering(1013) 00:17:09.047 fused_ordering(1014) 00:17:09.047 fused_ordering(1015) 00:17:09.047 fused_ordering(1016) 00:17:09.047 fused_ordering(1017) 00:17:09.047 fused_ordering(1018) 00:17:09.047 fused_ordering(1019) 00:17:09.047 fused_ordering(1020) 00:17:09.047 fused_ordering(1021) 00:17:09.047 fused_ordering(1022) 00:17:09.047 fused_ordering(1023) 00:17:09.047 12:44:42 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:09.047 12:44:42 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:09.047 12:44:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:09.047 12:44:42 -- nvmf/common.sh@116 -- # sync 00:17:09.047 12:44:42 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:09.047 12:44:42 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:09.047 12:44:42 -- nvmf/common.sh@119 -- # set +e 00:17:09.047 12:44:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:09.047 12:44:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:09.047 rmmod nvme_rdma 00:17:09.047 rmmod nvme_fabrics 00:17:09.308 12:44:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:09.308 12:44:42 -- nvmf/common.sh@123 -- # set -e 00:17:09.308 12:44:42 -- nvmf/common.sh@124 -- # return 0 00:17:09.308 12:44:42 -- nvmf/common.sh@477 -- # '[' -n 486788 ']' 00:17:09.308 12:44:42 -- nvmf/common.sh@478 -- # killprocess 486788 00:17:09.308 12:44:42 -- common/autotest_common.sh@936 -- # '[' -z 486788 ']' 00:17:09.308 12:44:42 -- common/autotest_common.sh@940 -- # kill -0 486788 00:17:09.308 12:44:42 -- common/autotest_common.sh@941 -- # uname 00:17:09.308 12:44:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:09.308 12:44:42 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 486788 00:17:09.308 12:44:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:09.308 12:44:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:09.308 12:44:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 486788' 00:17:09.308 killing process with pid 486788 00:17:09.308 12:44:42 -- common/autotest_common.sh@955 -- # kill 486788 00:17:09.308 12:44:42 -- common/autotest_common.sh@960 -- # wait 486788 00:17:09.569 12:44:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:09.569 12:44:42 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:09.569 00:17:09.569 real 0m9.733s 00:17:09.569 user 0m5.261s 00:17:09.569 sys 0m5.891s 00:17:09.569 12:44:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:09.569 12:44:42 -- common/autotest_common.sh@10 -- # set +x 00:17:09.569 ************************************ 00:17:09.569 END TEST nvmf_fused_ordering 00:17:09.569 ************************************ 00:17:09.569 12:44:42 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:17:09.569 12:44:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:09.569 12:44:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:09.569 12:44:42 -- common/autotest_common.sh@10 -- # set +x 00:17:09.569 ************************************ 00:17:09.569 START TEST nvmf_delete_subsystem 00:17:09.569 ************************************ 00:17:09.569 12:44:42 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:17:09.569 * Looking for test storage... 00:17:09.569 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:09.569 12:44:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:09.569 12:44:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:09.569 12:44:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:09.569 12:44:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:09.569 12:44:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:09.569 12:44:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:09.569 12:44:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:09.569 12:44:42 -- scripts/common.sh@335 -- # IFS=.-: 00:17:09.569 12:44:42 -- scripts/common.sh@335 -- # read -ra ver1 00:17:09.569 12:44:42 -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.569 12:44:42 -- scripts/common.sh@336 -- # read -ra ver2 00:17:09.569 12:44:42 -- scripts/common.sh@337 -- # local 'op=<' 00:17:09.569 12:44:42 -- scripts/common.sh@339 -- # ver1_l=2 00:17:09.569 12:44:42 -- scripts/common.sh@340 -- # ver2_l=1 00:17:09.569 12:44:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:09.569 12:44:42 -- scripts/common.sh@343 -- # case "$op" in 00:17:09.569 12:44:42 -- scripts/common.sh@344 -- # : 1 00:17:09.569 12:44:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:09.569 12:44:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:09.569 12:44:42 -- scripts/common.sh@364 -- # decimal 1 00:17:09.569 12:44:42 -- scripts/common.sh@352 -- # local d=1 00:17:09.569 12:44:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.569 12:44:42 -- scripts/common.sh@354 -- # echo 1 00:17:09.569 12:44:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:09.569 12:44:42 -- scripts/common.sh@365 -- # decimal 2 00:17:09.569 12:44:42 -- scripts/common.sh@352 -- # local d=2 00:17:09.569 12:44:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.569 12:44:42 -- scripts/common.sh@354 -- # echo 2 00:17:09.569 12:44:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:09.569 12:44:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:09.569 12:44:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:09.569 12:44:42 -- scripts/common.sh@367 -- # return 0 00:17:09.569 12:44:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.569 12:44:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:09.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.569 --rc genhtml_branch_coverage=1 00:17:09.569 --rc genhtml_function_coverage=1 00:17:09.569 --rc genhtml_legend=1 00:17:09.569 --rc geninfo_all_blocks=1 00:17:09.569 --rc geninfo_unexecuted_blocks=1 00:17:09.569 00:17:09.569 ' 00:17:09.570 12:44:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:09.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.570 --rc genhtml_branch_coverage=1 00:17:09.570 --rc genhtml_function_coverage=1 00:17:09.570 --rc genhtml_legend=1 00:17:09.570 --rc geninfo_all_blocks=1 00:17:09.570 --rc geninfo_unexecuted_blocks=1 00:17:09.570 00:17:09.570 ' 00:17:09.570 12:44:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:09.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.570 --rc genhtml_branch_coverage=1 00:17:09.570 --rc genhtml_function_coverage=1 00:17:09.570 --rc genhtml_legend=1 00:17:09.570 --rc geninfo_all_blocks=1 00:17:09.570 --rc geninfo_unexecuted_blocks=1 00:17:09.570 00:17:09.570 ' 00:17:09.570 12:44:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:09.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.570 --rc genhtml_branch_coverage=1 00:17:09.570 --rc genhtml_function_coverage=1 00:17:09.570 --rc genhtml_legend=1 00:17:09.570 --rc geninfo_all_blocks=1 00:17:09.570 --rc geninfo_unexecuted_blocks=1 00:17:09.570 00:17:09.570 ' 00:17:09.570 12:44:42 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.570 12:44:42 -- nvmf/common.sh@7 -- # uname -s 00:17:09.570 12:44:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.570 12:44:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.570 12:44:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.570 12:44:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.831 12:44:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.831 12:44:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.831 12:44:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.831 12:44:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.831 12:44:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.831 12:44:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.831 12:44:42 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:09.831 12:44:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:09.831 12:44:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.831 12:44:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.831 12:44:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.831 12:44:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:09.831 12:44:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.831 12:44:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.831 12:44:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.831 12:44:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.831 12:44:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.831 12:44:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.831 12:44:42 -- paths/export.sh@5 -- # export PATH 00:17:09.831 12:44:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.831 12:44:42 -- nvmf/common.sh@46 -- # : 0 00:17:09.831 12:44:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:09.831 12:44:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:09.831 12:44:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:09.831 12:44:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.831 12:44:42 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.831 12:44:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:09.831 12:44:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:09.831 12:44:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:09.831 12:44:42 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:17:09.831 12:44:42 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:09.831 12:44:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.831 12:44:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:09.831 12:44:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:09.831 12:44:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:09.831 12:44:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.831 12:44:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.831 12:44:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.831 12:44:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:09.831 12:44:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:09.831 12:44:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:09.831 12:44:42 -- common/autotest_common.sh@10 -- # set +x 00:17:17.973 12:44:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:17.973 12:44:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:17.973 12:44:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:17.973 12:44:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:17.973 12:44:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:17.973 12:44:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:17.973 12:44:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:17.973 12:44:49 -- nvmf/common.sh@294 -- # net_devs=() 00:17:17.973 12:44:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:17.973 12:44:49 -- nvmf/common.sh@295 -- # e810=() 00:17:17.973 12:44:49 -- nvmf/common.sh@295 -- # local -ga e810 00:17:17.973 12:44:49 -- nvmf/common.sh@296 -- # x722=() 00:17:17.973 12:44:49 -- nvmf/common.sh@296 -- # local -ga x722 00:17:17.973 12:44:49 -- nvmf/common.sh@297 -- # mlx=() 00:17:17.973 12:44:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:17.973 12:44:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.973 12:44:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:17.973 12:44:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.973 12:44:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.974 12:44:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.974 12:44:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.974 12:44:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.974 12:44:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.974 12:44:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.974 12:44:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.974 12:44:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.974 12:44:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:17.974 12:44:49 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:17.974 12:44:49 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:17.974 12:44:49 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
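(Annotation) The PCI scan above keys off SPDK_TEST_NVMF_NICS=mlx5 from autorun-spdk.conf, so only the Mellanox (vendor 0x15b3) entries are kept. A rough manual equivalent of the discovery nvmf/common.sh performs in the records just below, using the addresses this run reports (exact PCI addresses and net device names are machine-specific assumptions), would be:

  lspci -nn -d 15b3:                          # list Mellanox PCI functions by vendor id; this host shows 0000:98:00.0 and 0000:98:00.1 (0x15b3 - 0x1015)
  ls /sys/bus/pci/devices/0000:98:00.0/net    # map a PCI function to its kernel net device via the same sysfs path common.sh uses (mlx_0_0 on this host)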
00:17:17.974 12:44:49 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:17.974 12:44:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:17.974 12:44:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:17.974 12:44:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:17:17.974 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:17:17.974 12:44:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:17.974 12:44:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:17.974 12:44:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:17:17.974 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:17:17.974 12:44:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:17.974 12:44:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:17.974 12:44:49 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:17.974 12:44:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.974 12:44:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:17.974 12:44:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.974 12:44:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:17:17.974 Found net devices under 0000:98:00.0: mlx_0_0 00:17:17.974 12:44:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.974 12:44:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:17.974 12:44:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.974 12:44:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:17.974 12:44:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.974 12:44:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:17:17.974 Found net devices under 0000:98:00.1: mlx_0_1 00:17:17.974 12:44:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.974 12:44:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:17.974 12:44:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:17.974 12:44:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:17.974 12:44:49 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:17.974 12:44:49 -- nvmf/common.sh@57 -- # uname 00:17:17.974 12:44:49 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:17.974 12:44:49 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:17.974 12:44:49 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:17.974 12:44:49 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:17.974 
12:44:49 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:17.974 12:44:49 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:17.974 12:44:49 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:17.974 12:44:49 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:17.974 12:44:49 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:17.974 12:44:49 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:17.974 12:44:49 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:17.974 12:44:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:17.974 12:44:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:17.974 12:44:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:17.974 12:44:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:17.974 12:44:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:17.974 12:44:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:17.974 12:44:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.974 12:44:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:17.974 12:44:49 -- nvmf/common.sh@104 -- # continue 2 00:17:17.974 12:44:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:17.974 12:44:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.974 12:44:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.974 12:44:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:17.974 12:44:49 -- nvmf/common.sh@104 -- # continue 2 00:17:17.974 12:44:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:17.974 12:44:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:17.974 12:44:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:17.974 12:44:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:17.974 12:44:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:17.974 12:44:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:17.974 12:44:49 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:17.974 12:44:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:17.974 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:17.974 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:17:17.974 altname enp152s0f0np0 00:17:17.974 altname ens817f0np0 00:17:17.974 inet 192.168.100.8/24 scope global mlx_0_0 00:17:17.974 valid_lft forever preferred_lft forever 00:17:17.974 12:44:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:17.974 12:44:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:17.974 12:44:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:17.974 12:44:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:17.974 12:44:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:17.974 12:44:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:17.974 12:44:49 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:17.974 12:44:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:17.974 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:17.974 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:17:17.974 altname enp152s0f1np1 
00:17:17.974 altname ens817f1np1 00:17:17.974 inet 192.168.100.9/24 scope global mlx_0_1 00:17:17.974 valid_lft forever preferred_lft forever 00:17:17.974 12:44:49 -- nvmf/common.sh@410 -- # return 0 00:17:17.974 12:44:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:17.974 12:44:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:17.974 12:44:49 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:17.974 12:44:49 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:17.974 12:44:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:17.974 12:44:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:17.974 12:44:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:17.974 12:44:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:17.974 12:44:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:17.974 12:44:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:17.974 12:44:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.974 12:44:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:17.974 12:44:49 -- nvmf/common.sh@104 -- # continue 2 00:17:17.974 12:44:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:17.974 12:44:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.974 12:44:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.974 12:44:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:17.974 12:44:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:17.974 12:44:49 -- nvmf/common.sh@104 -- # continue 2 00:17:17.974 12:44:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:17.974 12:44:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:17.974 12:44:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:17.974 12:44:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:17.974 12:44:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:17.974 12:44:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:17.974 12:44:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:17.974 12:44:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:17.974 12:44:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:17.974 12:44:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:17.974 12:44:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:17.974 12:44:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:17.974 12:44:49 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:17.974 192.168.100.9' 00:17:17.974 12:44:49 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:17.975 192.168.100.9' 00:17:17.975 12:44:49 -- nvmf/common.sh@445 -- # head -n 1 00:17:17.975 12:44:49 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:17.975 12:44:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:17.975 192.168.100.9' 00:17:17.975 12:44:49 -- nvmf/common.sh@446 -- # tail -n +2 00:17:17.975 12:44:49 -- nvmf/common.sh@446 -- # head -n 1 00:17:17.975 12:44:49 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:17.975 12:44:49 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:17.975 12:44:49 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:17:17.975 12:44:49 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:17.975 12:44:49 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:17.975 12:44:49 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:17.975 12:44:49 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:17:17.975 12:44:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:17.975 12:44:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:17.975 12:44:49 -- common/autotest_common.sh@10 -- # set +x 00:17:17.975 12:44:49 -- nvmf/common.sh@469 -- # nvmfpid=491163 00:17:17.975 12:44:49 -- nvmf/common.sh@470 -- # waitforlisten 491163 00:17:17.975 12:44:49 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:17.975 12:44:49 -- common/autotest_common.sh@829 -- # '[' -z 491163 ']' 00:17:17.975 12:44:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.975 12:44:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:17.975 12:44:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.975 12:44:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:17.975 12:44:49 -- common/autotest_common.sh@10 -- # set +x 00:17:17.975 [2024-11-20 12:44:49.871167] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:17.975 [2024-11-20 12:44:49.871234] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.975 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.975 [2024-11-20 12:44:49.932691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:17.975 [2024-11-20 12:44:49.997647] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:17.975 [2024-11-20 12:44:49.997763] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.975 [2024-11-20 12:44:49.997771] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.975 [2024-11-20 12:44:49.997779] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
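(Annotation) nvmfappstart launched the target above as build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3: shared-memory id 0, all tracepoint groups enabled (the "Tracepoint Group Mask 0xFFFF" notice), and a core mask covering the two cores whose reactors come up in the next records. Following the hint printed by app_setup_trace, a tracepoint snapshot could be pulled while the target runs with something like the sketch below (the binary path relative to the spdk checkout is an assumption; the -s/-i arguments are quoted from the log hint):

  ./build/bin/spdk_trace -s nvmf -i 0    # attach to shared-memory id 0 of the app registered as "nvmf" and dump the captured tracepoints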
00:17:17.975 [2024-11-20 12:44:49.997919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.975 [2024-11-20 12:44:49.997920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.975 12:44:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.975 12:44:50 -- common/autotest_common.sh@862 -- # return 0 00:17:17.975 12:44:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:17.975 12:44:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:17.975 12:44:50 -- common/autotest_common.sh@10 -- # set +x 00:17:17.975 12:44:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.975 12:44:50 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:17.975 12:44:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.975 12:44:50 -- common/autotest_common.sh@10 -- # set +x 00:17:17.975 [2024-11-20 12:44:50.723387] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x235e1a0/0x2362690) succeed. 00:17:17.975 [2024-11-20 12:44:50.736522] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x235f6a0/0x23a3d30) succeed. 00:17:17.975 12:44:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.975 12:44:50 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:17.975 12:44:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.975 12:44:50 -- common/autotest_common.sh@10 -- # set +x 00:17:17.975 12:44:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.975 12:44:50 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:17.975 12:44:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.975 12:44:50 -- common/autotest_common.sh@10 -- # set +x 00:17:17.975 [2024-11-20 12:44:50.821504] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:17.975 12:44:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.975 12:44:50 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:17.975 12:44:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.975 12:44:50 -- common/autotest_common.sh@10 -- # set +x 00:17:17.975 NULL1 00:17:17.975 12:44:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.975 12:44:50 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:17.975 12:44:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.975 12:44:50 -- common/autotest_common.sh@10 -- # set +x 00:17:17.975 Delay0 00:17:17.975 12:44:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.975 12:44:50 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:17.975 12:44:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.975 12:44:50 -- common/autotest_common.sh@10 -- # set +x 00:17:17.975 12:44:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.975 12:44:50 -- target/delete_subsystem.sh@28 -- # perf_pid=491227 00:17:17.975 12:44:50 -- target/delete_subsystem.sh@30 -- # sleep 2 00:17:17.975 12:44:50 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma 
adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:17.975 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.975 [2024-11-20 12:44:50.929915] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:19.888 12:44:52 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.889 12:44:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.889 12:44:52 -- common/autotest_common.sh@10 -- # set +x 00:17:21.272 NVMe io qpair process completion error 00:17:21.272 NVMe io qpair process completion error 00:17:21.272 NVMe io qpair process completion error 00:17:21.272 NVMe io qpair process completion error 00:17:21.272 NVMe io qpair process completion error 00:17:21.272 NVMe io qpair process completion error 00:17:21.272 12:44:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.272 12:44:54 -- target/delete_subsystem.sh@34 -- # delay=0 00:17:21.272 12:44:54 -- target/delete_subsystem.sh@35 -- # kill -0 491227 00:17:21.272 12:44:54 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:21.533 12:44:54 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:21.533 12:44:54 -- target/delete_subsystem.sh@35 -- # kill -0 491227 00:17:21.533 12:44:54 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:22.117 Read completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Read completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Read completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Write completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Read completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Read completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Read completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Read completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Read completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Read completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Write completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Read completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Write completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Write completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Read completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Read completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Read completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Read completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Read completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Read completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Write completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Read completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 00:17:22.117 Read 
completed with error (sct=0, sc=8) 00:17:22.117 starting I/O failed: -6 [several hundred additional 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' records, logged between 00:17:22.117 and 00:17:22.118 while the subsystem was being deleted, condensed here] 00:17:22.118 Read
completed with error (sct=0, sc=8) 00:17:22.118 Write completed with error (sct=0, sc=8) 00:17:22.118 Read completed with error (sct=0, sc=8) 00:17:22.118 Write completed with error (sct=0, sc=8) 00:17:22.118 Write completed with error (sct=0, sc=8) 00:17:22.118 Read completed with error (sct=0, sc=8) 00:17:22.118 Write completed with error (sct=0, sc=8) 00:17:22.118 Write completed with error (sct=0, sc=8) 00:17:22.118 Write completed with error (sct=0, sc=8) 00:17:22.118 Write completed with error (sct=0, sc=8) 00:17:22.118 Read completed with error (sct=0, sc=8) 00:17:22.118 Read completed with error (sct=0, sc=8) 00:17:22.118 Read completed with error (sct=0, sc=8) 00:17:22.118 Read completed with error (sct=0, sc=8) 00:17:22.118 Write completed with error (sct=0, sc=8) 00:17:22.118 Read completed with error (sct=0, sc=8) 00:17:22.118 Write completed with error (sct=0, sc=8) 00:17:22.118 Write completed with error (sct=0, sc=8) 00:17:22.118 Read completed with error (sct=0, sc=8) 00:17:22.118 [2024-11-20 12:44:55.023211] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:17:22.118 12:44:55 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:22.118 12:44:55 -- target/delete_subsystem.sh@35 -- # kill -0 491227 00:17:22.118 12:44:55 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:22.118 [2024-11-20 12:44:55.037262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:22.118 [2024-11-20 12:44:55.037275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:22.118 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:17:22.118 Initializing NVMe Controllers 00:17:22.118 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:22.118 Controller IO queue size 128, less than required. 00:17:22.118 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:22.118 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:22.118 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:22.118 Initialization complete. Launching workers. 
00:17:22.118 ======================================================== 00:17:22.119 Latency(us) 00:17:22.119 Device Information : IOPS MiB/s Average min max 00:17:22.119 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.65 0.04 1591400.41 1000122.47 2968775.78 00:17:22.119 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.65 0.04 1592831.43 1000632.21 2969927.03 00:17:22.119 ======================================================== 00:17:22.119 Total : 161.30 0.08 1592115.92 1000122.47 2969927.03 00:17:22.119 00:17:22.691 12:44:55 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:22.691 12:44:55 -- target/delete_subsystem.sh@35 -- # kill -0 491227 00:17:22.691 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (491227) - No such process 00:17:22.691 12:44:55 -- target/delete_subsystem.sh@45 -- # NOT wait 491227 00:17:22.691 12:44:55 -- common/autotest_common.sh@650 -- # local es=0 00:17:22.691 12:44:55 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 491227 00:17:22.691 12:44:55 -- common/autotest_common.sh@638 -- # local arg=wait 00:17:22.691 12:44:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:22.691 12:44:55 -- common/autotest_common.sh@642 -- # type -t wait 00:17:22.691 12:44:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:22.691 12:44:55 -- common/autotest_common.sh@653 -- # wait 491227 00:17:22.691 12:44:55 -- common/autotest_common.sh@653 -- # es=1 00:17:22.691 12:44:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:22.691 12:44:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:22.691 12:44:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:22.691 12:44:55 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:22.691 12:44:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.691 12:44:55 -- common/autotest_common.sh@10 -- # set +x 00:17:22.691 12:44:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.691 12:44:55 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:22.691 12:44:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.691 12:44:55 -- common/autotest_common.sh@10 -- # set +x 00:17:22.691 [2024-11-20 12:44:55.557432] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:22.691 12:44:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.691 12:44:55 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:22.691 12:44:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.691 12:44:55 -- common/autotest_common.sh@10 -- # set +x 00:17:22.691 12:44:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.691 12:44:55 -- target/delete_subsystem.sh@54 -- # perf_pid=492209 00:17:22.691 12:44:55 -- target/delete_subsystem.sh@56 -- # delay=0 00:17:22.691 12:44:55 -- target/delete_subsystem.sh@57 -- # kill -0 492209 00:17:22.691 12:44:55 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:22.691 12:44:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:22.691 EAL: No 
free 2048 kB hugepages reported on node 1 00:17:22.691 [2024-11-20 12:44:55.654210] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:23.262 12:44:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:23.262 12:44:56 -- target/delete_subsystem.sh@57 -- # kill -0 492209 00:17:23.262 12:44:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:23.524 12:44:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:23.524 12:44:56 -- target/delete_subsystem.sh@57 -- # kill -0 492209 00:17:23.524 12:44:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:24.096 12:44:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:24.096 12:44:57 -- target/delete_subsystem.sh@57 -- # kill -0 492209 00:17:24.096 12:44:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:24.667 12:44:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:24.668 12:44:57 -- target/delete_subsystem.sh@57 -- # kill -0 492209 00:17:24.668 12:44:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:25.240 12:44:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:25.240 12:44:58 -- target/delete_subsystem.sh@57 -- # kill -0 492209 00:17:25.240 12:44:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:25.501 12:44:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:25.501 12:44:58 -- target/delete_subsystem.sh@57 -- # kill -0 492209 00:17:25.501 12:44:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:26.072 12:44:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:26.072 12:44:59 -- target/delete_subsystem.sh@57 -- # kill -0 492209 00:17:26.072 12:44:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:26.642 12:44:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:26.642 12:44:59 -- target/delete_subsystem.sh@57 -- # kill -0 492209 00:17:26.642 12:44:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:27.213 12:45:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:27.213 12:45:00 -- target/delete_subsystem.sh@57 -- # kill -0 492209 00:17:27.213 12:45:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:27.785 12:45:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:27.785 12:45:00 -- target/delete_subsystem.sh@57 -- # kill -0 492209 00:17:27.785 12:45:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:28.047 12:45:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:28.047 12:45:01 -- target/delete_subsystem.sh@57 -- # kill -0 492209 00:17:28.047 12:45:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:28.619 12:45:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:28.619 12:45:01 -- target/delete_subsystem.sh@57 -- # kill -0 492209 00:17:28.619 12:45:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:29.190 12:45:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:29.190 12:45:02 -- target/delete_subsystem.sh@57 -- # kill -0 492209 00:17:29.190 12:45:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:29.761 12:45:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:29.761 12:45:02 -- target/delete_subsystem.sh@57 -- # kill -0 492209 00:17:29.761 12:45:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:29.761 Initializing NVMe Controllers 00:17:29.761 
Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:29.761 Controller IO queue size 128, less than required. 00:17:29.761 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:29.761 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:29.761 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:29.761 Initialization complete. Launching workers. 00:17:29.761 ======================================================== 00:17:29.761 Latency(us) 00:17:29.761 Device Information : IOPS MiB/s Average min max 00:17:29.761 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001196.08 1000057.98 1003456.78 00:17:29.761 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1001638.01 1000039.16 1005452.97 00:17:29.761 ======================================================== 00:17:29.761 Total : 256.00 0.12 1001417.04 1000039.16 1005452.97 00:17:29.761 00:17:30.332 12:45:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:30.332 12:45:03 -- target/delete_subsystem.sh@57 -- # kill -0 492209 00:17:30.332 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (492209) - No such process 00:17:30.332 12:45:03 -- target/delete_subsystem.sh@67 -- # wait 492209 00:17:30.332 12:45:03 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:30.332 12:45:03 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:17:30.332 12:45:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:30.332 12:45:03 -- nvmf/common.sh@116 -- # sync 00:17:30.332 12:45:03 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:30.332 12:45:03 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:30.332 12:45:03 -- nvmf/common.sh@119 -- # set +e 00:17:30.332 12:45:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:30.332 12:45:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:30.332 rmmod nvme_rdma 00:17:30.332 rmmod nvme_fabrics 00:17:30.332 12:45:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:30.332 12:45:03 -- nvmf/common.sh@123 -- # set -e 00:17:30.332 12:45:03 -- nvmf/common.sh@124 -- # return 0 00:17:30.332 12:45:03 -- nvmf/common.sh@477 -- # '[' -n 491163 ']' 00:17:30.332 12:45:03 -- nvmf/common.sh@478 -- # killprocess 491163 00:17:30.332 12:45:03 -- common/autotest_common.sh@936 -- # '[' -z 491163 ']' 00:17:30.332 12:45:03 -- common/autotest_common.sh@940 -- # kill -0 491163 00:17:30.332 12:45:03 -- common/autotest_common.sh@941 -- # uname 00:17:30.332 12:45:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:30.332 12:45:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 491163 00:17:30.332 12:45:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:30.332 12:45:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:30.332 12:45:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 491163' 00:17:30.332 killing process with pid 491163 00:17:30.332 12:45:03 -- common/autotest_common.sh@955 -- # kill 491163 00:17:30.332 12:45:03 -- common/autotest_common.sh@960 -- # wait 491163 00:17:30.594 12:45:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:30.594 12:45:03 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:30.594 00:17:30.594 real 0m20.978s 00:17:30.594 user 0m50.189s 00:17:30.594 
sys 0m6.474s 00:17:30.594 12:45:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:30.594 12:45:03 -- common/autotest_common.sh@10 -- # set +x 00:17:30.594 ************************************ 00:17:30.594 END TEST nvmf_delete_subsystem 00:17:30.594 ************************************ 00:17:30.594 12:45:03 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:17:30.594 12:45:03 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:30.594 12:45:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:30.594 12:45:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:30.594 12:45:03 -- common/autotest_common.sh@10 -- # set +x 00:17:30.594 ************************************ 00:17:30.594 START TEST nvmf_nvme_cli 00:17:30.594 ************************************ 00:17:30.594 12:45:03 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:30.594 * Looking for test storage... 00:17:30.594 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:30.594 12:45:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:30.594 12:45:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:30.594 12:45:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:30.594 12:45:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:30.594 12:45:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:30.594 12:45:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:30.594 12:45:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:30.594 12:45:03 -- scripts/common.sh@335 -- # IFS=.-: 00:17:30.594 12:45:03 -- scripts/common.sh@335 -- # read -ra ver1 00:17:30.594 12:45:03 -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.594 12:45:03 -- scripts/common.sh@336 -- # read -ra ver2 00:17:30.595 12:45:03 -- scripts/common.sh@337 -- # local 'op=<' 00:17:30.595 12:45:03 -- scripts/common.sh@339 -- # ver1_l=2 00:17:30.595 12:45:03 -- scripts/common.sh@340 -- # ver2_l=1 00:17:30.595 12:45:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:30.595 12:45:03 -- scripts/common.sh@343 -- # case "$op" in 00:17:30.595 12:45:03 -- scripts/common.sh@344 -- # : 1 00:17:30.595 12:45:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:30.595 12:45:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:30.595 12:45:03 -- scripts/common.sh@364 -- # decimal 1 00:17:30.595 12:45:03 -- scripts/common.sh@352 -- # local d=1 00:17:30.595 12:45:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.595 12:45:03 -- scripts/common.sh@354 -- # echo 1 00:17:30.595 12:45:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:30.595 12:45:03 -- scripts/common.sh@365 -- # decimal 2 00:17:30.595 12:45:03 -- scripts/common.sh@352 -- # local d=2 00:17:30.595 12:45:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.595 12:45:03 -- scripts/common.sh@354 -- # echo 2 00:17:30.595 12:45:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:30.595 12:45:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:30.595 12:45:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:30.595 12:45:03 -- scripts/common.sh@367 -- # return 0 00:17:30.595 12:45:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.595 12:45:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:30.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.595 --rc genhtml_branch_coverage=1 00:17:30.595 --rc genhtml_function_coverage=1 00:17:30.595 --rc genhtml_legend=1 00:17:30.595 --rc geninfo_all_blocks=1 00:17:30.595 --rc geninfo_unexecuted_blocks=1 00:17:30.595 00:17:30.595 ' 00:17:30.595 12:45:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:30.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.595 --rc genhtml_branch_coverage=1 00:17:30.595 --rc genhtml_function_coverage=1 00:17:30.595 --rc genhtml_legend=1 00:17:30.595 --rc geninfo_all_blocks=1 00:17:30.595 --rc geninfo_unexecuted_blocks=1 00:17:30.595 00:17:30.595 ' 00:17:30.595 12:45:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:30.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.595 --rc genhtml_branch_coverage=1 00:17:30.595 --rc genhtml_function_coverage=1 00:17:30.595 --rc genhtml_legend=1 00:17:30.595 --rc geninfo_all_blocks=1 00:17:30.595 --rc geninfo_unexecuted_blocks=1 00:17:30.595 00:17:30.595 ' 00:17:30.595 12:45:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:30.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.595 --rc genhtml_branch_coverage=1 00:17:30.595 --rc genhtml_function_coverage=1 00:17:30.595 --rc genhtml_legend=1 00:17:30.595 --rc geninfo_all_blocks=1 00:17:30.595 --rc geninfo_unexecuted_blocks=1 00:17:30.595 00:17:30.595 ' 00:17:30.595 12:45:03 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.595 12:45:03 -- nvmf/common.sh@7 -- # uname -s 00:17:30.595 12:45:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.595 12:45:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.595 12:45:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.595 12:45:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.595 12:45:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.595 12:45:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.595 12:45:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.595 12:45:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.595 12:45:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.856 12:45:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.856 12:45:03 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:30.856 12:45:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:30.856 12:45:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.856 12:45:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.856 12:45:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.856 12:45:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:30.856 12:45:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.856 12:45:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.856 12:45:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.856 12:45:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.856 12:45:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.856 12:45:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.856 12:45:03 -- paths/export.sh@5 -- # export PATH 00:17:30.856 12:45:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.856 12:45:03 -- nvmf/common.sh@46 -- # : 0 00:17:30.856 12:45:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:30.856 12:45:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:30.856 12:45:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:30.856 12:45:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.856 12:45:03 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.856 12:45:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:30.856 12:45:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:30.856 12:45:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:30.856 12:45:03 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:30.856 12:45:03 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:30.856 12:45:03 -- target/nvme_cli.sh@14 -- # devs=() 00:17:30.856 12:45:03 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:30.856 12:45:03 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:30.856 12:45:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.856 12:45:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:30.856 12:45:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:30.856 12:45:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:30.856 12:45:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.856 12:45:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.856 12:45:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.856 12:45:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:30.856 12:45:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:30.856 12:45:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:30.856 12:45:03 -- common/autotest_common.sh@10 -- # set +x 00:17:39.005 12:45:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:39.005 12:45:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:39.005 12:45:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:39.005 12:45:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:39.005 12:45:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:39.005 12:45:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:39.005 12:45:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:39.005 12:45:10 -- nvmf/common.sh@294 -- # net_devs=() 00:17:39.005 12:45:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:39.005 12:45:10 -- nvmf/common.sh@295 -- # e810=() 00:17:39.005 12:45:10 -- nvmf/common.sh@295 -- # local -ga e810 00:17:39.005 12:45:10 -- nvmf/common.sh@296 -- # x722=() 00:17:39.005 12:45:10 -- nvmf/common.sh@296 -- # local -ga x722 00:17:39.005 12:45:10 -- nvmf/common.sh@297 -- # mlx=() 00:17:39.005 12:45:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:39.005 12:45:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.005 12:45:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.005 12:45:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.005 12:45:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.005 12:45:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.005 12:45:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.005 12:45:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.005 12:45:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.005 12:45:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.005 12:45:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.005 12:45:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.005 12:45:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:39.005 12:45:10 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:39.005 12:45:10 
-- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:39.005 12:45:10 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:39.005 12:45:10 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:39.005 12:45:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:39.005 12:45:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:39.005 12:45:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:17:39.005 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:17:39.005 12:45:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:39.005 12:45:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:39.005 12:45:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:17:39.005 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:17:39.005 12:45:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:39.005 12:45:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:39.005 12:45:10 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:39.005 12:45:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.005 12:45:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:39.005 12:45:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.005 12:45:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:17:39.005 Found net devices under 0000:98:00.0: mlx_0_0 00:17:39.005 12:45:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.005 12:45:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:39.005 12:45:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.005 12:45:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:39.005 12:45:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.005 12:45:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:17:39.005 Found net devices under 0000:98:00.1: mlx_0_1 00:17:39.005 12:45:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.005 12:45:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:39.005 12:45:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:39.005 12:45:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:39.005 12:45:10 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:39.005 12:45:10 -- nvmf/common.sh@57 -- # uname 00:17:39.005 12:45:10 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:39.005 
12:45:10 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:39.005 12:45:10 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:39.005 12:45:10 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:39.005 12:45:10 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:39.005 12:45:10 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:39.005 12:45:10 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:39.005 12:45:10 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:39.005 12:45:10 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:39.005 12:45:10 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:39.005 12:45:10 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:39.005 12:45:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:39.005 12:45:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:39.005 12:45:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:39.005 12:45:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:39.005 12:45:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:39.005 12:45:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:39.005 12:45:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.005 12:45:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:39.005 12:45:10 -- nvmf/common.sh@104 -- # continue 2 00:17:39.005 12:45:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:39.005 12:45:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.005 12:45:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.005 12:45:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:39.005 12:45:10 -- nvmf/common.sh@104 -- # continue 2 00:17:39.005 12:45:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:39.005 12:45:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:39.005 12:45:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:39.005 12:45:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:39.005 12:45:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:39.005 12:45:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:39.005 12:45:10 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:39.005 12:45:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:39.005 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:39.005 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:17:39.005 altname enp152s0f0np0 00:17:39.005 altname ens817f0np0 00:17:39.005 inet 192.168.100.8/24 scope global mlx_0_0 00:17:39.005 valid_lft forever preferred_lft forever 00:17:39.005 12:45:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:39.005 12:45:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:39.005 12:45:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:39.005 12:45:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:39.005 12:45:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:39.005 12:45:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:39.005 12:45:10 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:39.005 12:45:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:39.005 12:45:10 -- nvmf/common.sh@80 -- # ip addr show 
mlx_0_1 00:17:39.005 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:39.005 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:17:39.005 altname enp152s0f1np1 00:17:39.005 altname ens817f1np1 00:17:39.005 inet 192.168.100.9/24 scope global mlx_0_1 00:17:39.005 valid_lft forever preferred_lft forever 00:17:39.006 12:45:10 -- nvmf/common.sh@410 -- # return 0 00:17:39.006 12:45:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:39.006 12:45:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:39.006 12:45:10 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:39.006 12:45:10 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:39.006 12:45:10 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:39.006 12:45:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:39.006 12:45:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:39.006 12:45:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:39.006 12:45:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:39.006 12:45:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:39.006 12:45:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:39.006 12:45:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.006 12:45:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:39.006 12:45:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:39.006 12:45:10 -- nvmf/common.sh@104 -- # continue 2 00:17:39.006 12:45:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:39.006 12:45:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.006 12:45:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:39.006 12:45:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.006 12:45:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:39.006 12:45:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:39.006 12:45:10 -- nvmf/common.sh@104 -- # continue 2 00:17:39.006 12:45:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:39.006 12:45:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:39.006 12:45:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:39.006 12:45:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:39.006 12:45:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:39.006 12:45:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:39.006 12:45:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:39.006 12:45:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:39.006 12:45:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:39.006 12:45:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:39.006 12:45:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:39.006 12:45:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:39.006 12:45:10 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:39.006 192.168.100.9' 00:17:39.006 12:45:10 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:39.006 192.168.100.9' 00:17:39.006 12:45:10 -- nvmf/common.sh@445 -- # head -n 1 00:17:39.006 12:45:10 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:39.006 12:45:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:39.006 192.168.100.9' 00:17:39.006 12:45:10 -- nvmf/common.sh@446 -- # tail -n +2 00:17:39.006 12:45:10 -- nvmf/common.sh@446 -- # head -n 1 00:17:39.006 12:45:10 -- nvmf/common.sh@446 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:39.006 12:45:10 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:39.006 12:45:10 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:39.006 12:45:10 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:39.006 12:45:10 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:39.006 12:45:10 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:39.006 12:45:10 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:39.006 12:45:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:39.006 12:45:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:39.006 12:45:10 -- common/autotest_common.sh@10 -- # set +x 00:17:39.006 12:45:10 -- nvmf/common.sh@469 -- # nvmfpid=498089 00:17:39.006 12:45:10 -- nvmf/common.sh@470 -- # waitforlisten 498089 00:17:39.006 12:45:10 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:39.006 12:45:10 -- common/autotest_common.sh@829 -- # '[' -z 498089 ']' 00:17:39.006 12:45:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.006 12:45:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.006 12:45:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.006 12:45:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.006 12:45:10 -- common/autotest_common.sh@10 -- # set +x 00:17:39.006 [2024-11-20 12:45:10.991390] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:39.006 [2024-11-20 12:45:10.991473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.006 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.006 [2024-11-20 12:45:11.058564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:39.006 [2024-11-20 12:45:11.134578] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:39.006 [2024-11-20 12:45:11.134721] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.006 [2024-11-20 12:45:11.134732] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.006 [2024-11-20 12:45:11.134741] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
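At this point the script has launched the SPDK NVMe-oF target (/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 498089 in this run) and waitforlisten is blocking until the target's RPC socket /var/tmp/spdk.sock comes up. A minimal hand-written sketch of that bring-up outside the test harness might look like the following; the polling loop and its timeout are illustrative assumptions, not taken from the trace:

    # start the NVMe-oF target with the same flags seen in the trace
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    tgt_pid=$!

    # wait for the RPC socket that the test's waitforlisten helper polls for
    for _ in $(seq 1 50); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done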
00:17:39.006 [2024-11-20 12:45:11.134879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.006 [2024-11-20 12:45:11.135021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.006 [2024-11-20 12:45:11.135124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:39.006 [2024-11-20 12:45:11.135124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.006 12:45:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.006 12:45:11 -- common/autotest_common.sh@862 -- # return 0 00:17:39.006 12:45:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:39.006 12:45:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:39.006 12:45:11 -- common/autotest_common.sh@10 -- # set +x 00:17:39.006 12:45:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.006 12:45:11 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:39.006 12:45:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.006 12:45:11 -- common/autotest_common.sh@10 -- # set +x 00:17:39.006 [2024-11-20 12:45:11.865020] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9197f0/0x91dce0) succeed. 00:17:39.006 [2024-11-20 12:45:11.878556] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x91ade0/0x95f380) succeed. 00:17:39.006 12:45:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.006 12:45:11 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:39.006 12:45:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.006 12:45:11 -- common/autotest_common.sh@10 -- # set +x 00:17:39.006 Malloc0 00:17:39.006 12:45:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.006 12:45:12 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:39.006 12:45:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.006 12:45:12 -- common/autotest_common.sh@10 -- # set +x 00:17:39.006 Malloc1 00:17:39.006 12:45:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.006 12:45:12 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:39.006 12:45:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.006 12:45:12 -- common/autotest_common.sh@10 -- # set +x 00:17:39.006 12:45:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.006 12:45:12 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:39.006 12:45:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.006 12:45:12 -- common/autotest_common.sh@10 -- # set +x 00:17:39.006 12:45:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.006 12:45:12 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:39.006 12:45:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.006 12:45:12 -- common/autotest_common.sh@10 -- # set +x 00:17:39.006 12:45:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.006 12:45:12 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:39.006 12:45:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.006 12:45:12 -- common/autotest_common.sh@10 -- # set +x 00:17:39.006 [2024-11-20 12:45:12.085968] 
rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:39.006 12:45:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.006 12:45:12 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:39.006 12:45:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.006 12:45:12 -- common/autotest_common.sh@10 -- # set +x 00:17:39.006 12:45:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.006 12:45:12 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 4420 00:17:39.267 00:17:39.267 Discovery Log Number of Records 2, Generation counter 2 00:17:39.267 =====Discovery Log Entry 0====== 00:17:39.267 trtype: rdma 00:17:39.267 adrfam: ipv4 00:17:39.267 subtype: current discovery subsystem 00:17:39.267 treq: not required 00:17:39.267 portid: 0 00:17:39.267 trsvcid: 4420 00:17:39.267 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:39.267 traddr: 192.168.100.8 00:17:39.267 eflags: explicit discovery connections, duplicate discovery information 00:17:39.267 rdma_prtype: not specified 00:17:39.267 rdma_qptype: connected 00:17:39.267 rdma_cms: rdma-cm 00:17:39.267 rdma_pkey: 0x0000 00:17:39.267 =====Discovery Log Entry 1====== 00:17:39.267 trtype: rdma 00:17:39.267 adrfam: ipv4 00:17:39.267 subtype: nvme subsystem 00:17:39.267 treq: not required 00:17:39.267 portid: 0 00:17:39.267 trsvcid: 4420 00:17:39.268 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:39.268 traddr: 192.168.100.8 00:17:39.268 eflags: none 00:17:39.268 rdma_prtype: not specified 00:17:39.268 rdma_qptype: connected 00:17:39.268 rdma_cms: rdma-cm 00:17:39.268 rdma_pkey: 0x0000 00:17:39.268 12:45:12 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:39.268 12:45:12 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:39.268 12:45:12 -- nvmf/common.sh@510 -- # local dev _ 00:17:39.268 12:45:12 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:39.268 12:45:12 -- nvmf/common.sh@509 -- # nvme list 00:17:39.268 12:45:12 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:39.268 12:45:12 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:39.268 12:45:12 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:39.268 12:45:12 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:39.268 12:45:12 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:39.268 12:45:12 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:40.651 12:45:13 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:40.651 12:45:13 -- common/autotest_common.sh@1187 -- # local i=0 00:17:40.651 12:45:13 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:40.651 12:45:13 -- common/autotest_common.sh@1189 -- # [[ -n 2 ]] 00:17:40.651 12:45:13 -- common/autotest_common.sh@1190 -- # nvme_device_counter=2 00:17:40.651 12:45:13 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:43.194 12:45:15 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:43.194 12:45:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:43.194 12:45:15 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:17:43.194 12:45:15 
-- common/autotest_common.sh@1196 -- # nvme_devices=2 00:17:43.194 12:45:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:43.194 12:45:15 -- common/autotest_common.sh@1197 -- # return 0 00:17:43.194 12:45:15 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:43.194 12:45:15 -- nvmf/common.sh@510 -- # local dev _ 00:17:43.194 12:45:15 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.194 12:45:15 -- nvmf/common.sh@509 -- # nvme list 00:17:43.194 12:45:15 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:43.194 12:45:15 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.194 12:45:15 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:43.194 12:45:15 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.194 12:45:15 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:43.194 12:45:15 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:43.194 12:45:15 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.194 12:45:15 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:43.194 12:45:15 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:43.194 12:45:15 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.194 12:45:15 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:43.194 /dev/nvme0n2 ]] 00:17:43.194 12:45:15 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:43.194 12:45:15 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:43.194 12:45:15 -- nvmf/common.sh@510 -- # local dev _ 00:17:43.194 12:45:15 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.194 12:45:15 -- nvmf/common.sh@509 -- # nvme list 00:17:43.194 12:45:15 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:43.194 12:45:15 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.194 12:45:15 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:43.194 12:45:15 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.194 12:45:15 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:43.194 12:45:15 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:43.194 12:45:15 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.194 12:45:15 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:43.194 12:45:15 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:43.194 12:45:15 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.194 12:45:15 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:43.194 12:45:15 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:44.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.133 12:45:16 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:44.133 12:45:16 -- common/autotest_common.sh@1208 -- # local i=0 00:17:44.133 12:45:16 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:17:44.133 12:45:16 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:44.133 12:45:16 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:17:44.133 12:45:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:44.133 12:45:16 -- common/autotest_common.sh@1220 -- # return 0 00:17:44.133 12:45:16 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:44.133 12:45:16 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:44.133 12:45:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.133 12:45:16 -- common/autotest_common.sh@10 -- # set +x 00:17:44.133 12:45:17 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.133 12:45:17 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:44.133 12:45:17 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:44.133 12:45:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:44.133 12:45:17 -- nvmf/common.sh@116 -- # sync 00:17:44.133 12:45:17 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:44.133 12:45:17 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:44.133 12:45:17 -- nvmf/common.sh@119 -- # set +e 00:17:44.133 12:45:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:44.133 12:45:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:44.133 rmmod nvme_rdma 00:17:44.133 rmmod nvme_fabrics 00:17:44.133 12:45:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:44.133 12:45:17 -- nvmf/common.sh@123 -- # set -e 00:17:44.133 12:45:17 -- nvmf/common.sh@124 -- # return 0 00:17:44.133 12:45:17 -- nvmf/common.sh@477 -- # '[' -n 498089 ']' 00:17:44.133 12:45:17 -- nvmf/common.sh@478 -- # killprocess 498089 00:17:44.133 12:45:17 -- common/autotest_common.sh@936 -- # '[' -z 498089 ']' 00:17:44.133 12:45:17 -- common/autotest_common.sh@940 -- # kill -0 498089 00:17:44.133 12:45:17 -- common/autotest_common.sh@941 -- # uname 00:17:44.133 12:45:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:44.133 12:45:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 498089 00:17:44.133 12:45:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:44.133 12:45:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:44.133 12:45:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 498089' 00:17:44.133 killing process with pid 498089 00:17:44.133 12:45:17 -- common/autotest_common.sh@955 -- # kill 498089 00:17:44.133 12:45:17 -- common/autotest_common.sh@960 -- # wait 498089 00:17:44.394 12:45:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:44.394 12:45:17 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:44.394 00:17:44.394 real 0m13.860s 00:17:44.394 user 0m26.885s 00:17:44.394 sys 0m6.044s 00:17:44.394 12:45:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:44.394 12:45:17 -- common/autotest_common.sh@10 -- # set +x 00:17:44.394 ************************************ 00:17:44.394 END TEST nvmf_nvme_cli 00:17:44.394 ************************************ 00:17:44.394 12:45:17 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:17:44.394 12:45:17 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:44.394 12:45:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:44.394 12:45:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:44.394 12:45:17 -- common/autotest_common.sh@10 -- # set +x 00:17:44.394 ************************************ 00:17:44.394 START TEST nvmf_host_management 00:17:44.394 ************************************ 00:17:44.394 12:45:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:44.394 * Looking for test storage... 
00:17:44.394 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:44.394 12:45:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:44.394 12:45:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:44.394 12:45:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:44.655 12:45:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:44.655 12:45:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:44.655 12:45:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:44.655 12:45:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:44.655 12:45:17 -- scripts/common.sh@335 -- # IFS=.-: 00:17:44.655 12:45:17 -- scripts/common.sh@335 -- # read -ra ver1 00:17:44.655 12:45:17 -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.655 12:45:17 -- scripts/common.sh@336 -- # read -ra ver2 00:17:44.655 12:45:17 -- scripts/common.sh@337 -- # local 'op=<' 00:17:44.655 12:45:17 -- scripts/common.sh@339 -- # ver1_l=2 00:17:44.655 12:45:17 -- scripts/common.sh@340 -- # ver2_l=1 00:17:44.655 12:45:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:44.655 12:45:17 -- scripts/common.sh@343 -- # case "$op" in 00:17:44.655 12:45:17 -- scripts/common.sh@344 -- # : 1 00:17:44.655 12:45:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:44.655 12:45:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:44.655 12:45:17 -- scripts/common.sh@364 -- # decimal 1 00:17:44.655 12:45:17 -- scripts/common.sh@352 -- # local d=1 00:17:44.655 12:45:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.655 12:45:17 -- scripts/common.sh@354 -- # echo 1 00:17:44.655 12:45:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:44.655 12:45:17 -- scripts/common.sh@365 -- # decimal 2 00:17:44.655 12:45:17 -- scripts/common.sh@352 -- # local d=2 00:17:44.655 12:45:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.655 12:45:17 -- scripts/common.sh@354 -- # echo 2 00:17:44.655 12:45:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:44.655 12:45:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:44.655 12:45:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:44.655 12:45:17 -- scripts/common.sh@367 -- # return 0 00:17:44.655 12:45:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.655 12:45:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:44.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.655 --rc genhtml_branch_coverage=1 00:17:44.655 --rc genhtml_function_coverage=1 00:17:44.655 --rc genhtml_legend=1 00:17:44.655 --rc geninfo_all_blocks=1 00:17:44.655 --rc geninfo_unexecuted_blocks=1 00:17:44.655 00:17:44.655 ' 00:17:44.655 12:45:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:44.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.655 --rc genhtml_branch_coverage=1 00:17:44.655 --rc genhtml_function_coverage=1 00:17:44.655 --rc genhtml_legend=1 00:17:44.655 --rc geninfo_all_blocks=1 00:17:44.655 --rc geninfo_unexecuted_blocks=1 00:17:44.655 00:17:44.655 ' 00:17:44.655 12:45:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:44.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.655 --rc genhtml_branch_coverage=1 00:17:44.655 --rc genhtml_function_coverage=1 00:17:44.655 --rc genhtml_legend=1 00:17:44.655 --rc geninfo_all_blocks=1 00:17:44.655 --rc geninfo_unexecuted_blocks=1 00:17:44.655 00:17:44.655 ' 
00:17:44.655 12:45:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:44.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.655 --rc genhtml_branch_coverage=1 00:17:44.655 --rc genhtml_function_coverage=1 00:17:44.655 --rc genhtml_legend=1 00:17:44.655 --rc geninfo_all_blocks=1 00:17:44.655 --rc geninfo_unexecuted_blocks=1 00:17:44.655 00:17:44.655 ' 00:17:44.655 12:45:17 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.655 12:45:17 -- nvmf/common.sh@7 -- # uname -s 00:17:44.655 12:45:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.655 12:45:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.655 12:45:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.655 12:45:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.655 12:45:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.655 12:45:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.655 12:45:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.655 12:45:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.655 12:45:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.655 12:45:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.655 12:45:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:44.655 12:45:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:44.655 12:45:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.655 12:45:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.655 12:45:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.655 12:45:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:44.655 12:45:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.655 12:45:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.655 12:45:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.656 12:45:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.656 12:45:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.656 12:45:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.656 12:45:17 -- paths/export.sh@5 -- # export PATH 00:17:44.656 12:45:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.656 12:45:17 -- nvmf/common.sh@46 -- # : 0 00:17:44.656 12:45:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:44.656 12:45:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:44.656 12:45:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:44.656 12:45:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.656 12:45:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.656 12:45:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:44.656 12:45:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:44.656 12:45:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:44.656 12:45:17 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:44.656 12:45:17 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:44.656 12:45:17 -- target/host_management.sh@104 -- # nvmftestinit 00:17:44.656 12:45:17 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:44.656 12:45:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.656 12:45:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:44.656 12:45:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:44.656 12:45:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:44.656 12:45:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.656 12:45:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.656 12:45:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.656 12:45:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:44.656 12:45:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:44.656 12:45:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:44.656 12:45:17 -- common/autotest_common.sh@10 -- # set +x 00:17:52.799 12:45:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:52.799 12:45:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:52.799 12:45:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:52.799 12:45:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:52.799 12:45:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:52.799 12:45:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:52.799 12:45:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:52.799 12:45:24 -- nvmf/common.sh@294 -- # net_devs=() 00:17:52.799 12:45:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:52.799 
12:45:24 -- nvmf/common.sh@295 -- # e810=() 00:17:52.799 12:45:24 -- nvmf/common.sh@295 -- # local -ga e810 00:17:52.799 12:45:24 -- nvmf/common.sh@296 -- # x722=() 00:17:52.799 12:45:24 -- nvmf/common.sh@296 -- # local -ga x722 00:17:52.799 12:45:24 -- nvmf/common.sh@297 -- # mlx=() 00:17:52.799 12:45:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:52.799 12:45:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:52.799 12:45:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:52.799 12:45:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:52.799 12:45:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:52.799 12:45:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:52.799 12:45:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:52.799 12:45:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:52.799 12:45:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:52.799 12:45:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:52.799 12:45:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:52.799 12:45:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:52.799 12:45:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:52.799 12:45:24 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:52.799 12:45:24 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:52.799 12:45:24 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:52.799 12:45:24 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:52.799 12:45:24 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:52.800 12:45:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:52.800 12:45:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:52.800 12:45:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:17:52.800 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:17:52.800 12:45:24 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:52.800 12:45:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:52.800 12:45:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:17:52.800 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:17:52.800 12:45:24 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:52.800 12:45:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:52.800 12:45:24 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:52.800 12:45:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.800 12:45:24 -- 
nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:52.800 12:45:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.800 12:45:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:17:52.800 Found net devices under 0000:98:00.0: mlx_0_0 00:17:52.800 12:45:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.800 12:45:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:52.800 12:45:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.800 12:45:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:52.800 12:45:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.800 12:45:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:17:52.800 Found net devices under 0000:98:00.1: mlx_0_1 00:17:52.800 12:45:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.800 12:45:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:52.800 12:45:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:52.800 12:45:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:52.800 12:45:24 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:52.800 12:45:24 -- nvmf/common.sh@57 -- # uname 00:17:52.800 12:45:24 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:52.800 12:45:24 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:52.800 12:45:24 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:52.800 12:45:24 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:52.800 12:45:24 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:52.800 12:45:24 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:52.800 12:45:24 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:52.800 12:45:24 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:52.800 12:45:24 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:52.800 12:45:24 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:52.800 12:45:24 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:52.800 12:45:24 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:52.800 12:45:24 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:52.800 12:45:24 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:52.800 12:45:24 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:52.800 12:45:24 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:52.800 12:45:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:52.800 12:45:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:52.800 12:45:24 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:52.800 12:45:24 -- nvmf/common.sh@104 -- # continue 2 00:17:52.800 12:45:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:52.800 12:45:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:52.800 12:45:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:52.800 12:45:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:52.800 12:45:24 -- nvmf/common.sh@104 -- # continue 2 00:17:52.800 12:45:24 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:52.800 12:45:24 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:52.800 12:45:24 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:52.800 12:45:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:52.800 12:45:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:52.800 12:45:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:52.800 12:45:24 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:52.800 12:45:24 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:52.800 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:52.800 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:17:52.800 altname enp152s0f0np0 00:17:52.800 altname ens817f0np0 00:17:52.800 inet 192.168.100.8/24 scope global mlx_0_0 00:17:52.800 valid_lft forever preferred_lft forever 00:17:52.800 12:45:24 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:52.800 12:45:24 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:52.800 12:45:24 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:52.800 12:45:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:52.800 12:45:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:52.800 12:45:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:52.800 12:45:24 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:52.800 12:45:24 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:52.800 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:52.800 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:17:52.800 altname enp152s0f1np1 00:17:52.800 altname ens817f1np1 00:17:52.800 inet 192.168.100.9/24 scope global mlx_0_1 00:17:52.800 valid_lft forever preferred_lft forever 00:17:52.800 12:45:24 -- nvmf/common.sh@410 -- # return 0 00:17:52.800 12:45:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:52.800 12:45:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:52.800 12:45:24 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:52.800 12:45:24 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:52.800 12:45:24 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:52.800 12:45:24 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:52.800 12:45:24 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:52.800 12:45:24 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:52.800 12:45:24 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:52.800 12:45:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:52.800 12:45:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:52.800 12:45:24 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:52.800 12:45:24 -- nvmf/common.sh@104 -- # continue 2 00:17:52.800 12:45:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:52.800 12:45:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:52.800 12:45:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:52.800 12:45:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:52.800 12:45:24 -- nvmf/common.sh@103 -- # echo mlx_0_1 
00:17:52.800 12:45:24 -- nvmf/common.sh@104 -- # continue 2 00:17:52.800 12:45:24 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:52.800 12:45:24 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:52.800 12:45:24 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:52.800 12:45:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:52.800 12:45:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:52.800 12:45:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:52.800 12:45:24 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:52.800 12:45:24 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:52.800 12:45:24 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:52.800 12:45:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:52.800 12:45:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:52.800 12:45:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:52.800 12:45:24 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:52.800 192.168.100.9' 00:17:52.800 12:45:24 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:52.800 192.168.100.9' 00:17:52.800 12:45:24 -- nvmf/common.sh@445 -- # head -n 1 00:17:52.800 12:45:24 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:52.800 12:45:24 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:52.800 192.168.100.9' 00:17:52.800 12:45:24 -- nvmf/common.sh@446 -- # tail -n +2 00:17:52.800 12:45:24 -- nvmf/common.sh@446 -- # head -n 1 00:17:52.800 12:45:24 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:52.800 12:45:24 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:52.800 12:45:24 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:52.800 12:45:24 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:52.800 12:45:24 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:52.800 12:45:24 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:52.800 12:45:24 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:52.800 12:45:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:52.800 12:45:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:52.800 12:45:24 -- common/autotest_common.sh@10 -- # set +x 00:17:52.800 ************************************ 00:17:52.800 START TEST nvmf_host_management 00:17:52.800 ************************************ 00:17:52.800 12:45:24 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:17:52.800 12:45:24 -- target/host_management.sh@69 -- # starttarget 00:17:52.801 12:45:24 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:52.801 12:45:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:52.801 12:45:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:52.801 12:45:24 -- common/autotest_common.sh@10 -- # set +x 00:17:52.801 12:45:24 -- nvmf/common.sh@469 -- # nvmfpid=503293 00:17:52.801 12:45:24 -- nvmf/common.sh@470 -- # waitforlisten 503293 00:17:52.801 12:45:24 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:52.801 12:45:24 -- common/autotest_common.sh@829 -- # '[' -z 503293 ']' 00:17:52.801 12:45:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.801 12:45:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.801 12:45:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
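The trace above resolves each RDMA interface's primary IPv4 address with ip/awk/cut and then splits the collected list into a first and second target address with head/tail. A minimal standalone sketch of that pattern, assuming the mlx_0_* netdevs already exist and carry addresses; get_ip_address and the NVMF_* names mirror what the trace shows but are reconstructed here, not copied from nvmf/common.sh:

get_ip_address() {
    local interface=$1
    # first IPv4 address on the interface, stripped of its /prefix length
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

rdma_ip_list=""
for nic in mlx_0_0 mlx_0_1; do
    ip_addr=$(get_ip_address "$nic")
    [[ -z $ip_addr ]] && continue
    rdma_ip_list+="$ip_addr"$'\n'
done

# first line -> first target, second line -> second target, as in the trace
NVMF_FIRST_TARGET_IP=$(echo "$rdma_ip_list" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$rdma_ip_list" | tail -n +2 | head -n 1)
echo "targets: $NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"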
00:17:52.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.801 12:45:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.801 12:45:24 -- common/autotest_common.sh@10 -- # set +x 00:17:52.801 [2024-11-20 12:45:24.913214] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:52.801 [2024-11-20 12:45:24.913266] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.801 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.801 [2024-11-20 12:45:24.991928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:52.801 [2024-11-20 12:45:25.067363] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:52.801 [2024-11-20 12:45:25.067515] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.801 [2024-11-20 12:45:25.067525] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.801 [2024-11-20 12:45:25.067533] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.801 [2024-11-20 12:45:25.067675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.801 [2024-11-20 12:45:25.067841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:52.801 [2024-11-20 12:45:25.068029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:52.801 [2024-11-20 12:45:25.068073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.801 12:45:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.801 12:45:25 -- common/autotest_common.sh@862 -- # return 0 00:17:52.801 12:45:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:52.801 12:45:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:52.801 12:45:25 -- common/autotest_common.sh@10 -- # set +x 00:17:52.801 12:45:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.801 12:45:25 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:52.801 12:45:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.801 12:45:25 -- common/autotest_common.sh@10 -- # set +x 00:17:52.801 [2024-11-20 12:45:25.771555] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd9bac0/0xd9ffb0) succeed. 00:17:52.801 [2024-11-20 12:45:25.786399] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd9d0b0/0xde1650) succeed. 
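Beyond the nvmf_create_transport call visible above, the test builds its subsystem through a generated rpcs.txt, so the individual RPCs are not spelled out in this log. As an approximation only, the equivalent explicit sequence against a running nvmf_tgt would look roughly like this (standard SPDK rpc.py method names; the bdev size, serial number and Malloc0 backing device are assumptions, while the transport options, NQNs and listener address are taken from the trace):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0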
00:17:53.062 12:45:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.062 12:45:25 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:53.062 12:45:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:53.062 12:45:25 -- common/autotest_common.sh@10 -- # set +x 00:17:53.062 12:45:25 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:53.062 12:45:25 -- target/host_management.sh@23 -- # cat 00:17:53.062 12:45:25 -- target/host_management.sh@30 -- # rpc_cmd 00:17:53.062 12:45:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.062 12:45:25 -- common/autotest_common.sh@10 -- # set +x 00:17:53.062 Malloc0 00:17:53.062 [2024-11-20 12:45:25.965244] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:53.062 12:45:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.062 12:45:25 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:53.062 12:45:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:53.062 12:45:25 -- common/autotest_common.sh@10 -- # set +x 00:17:53.062 12:45:26 -- target/host_management.sh@73 -- # perfpid=503512 00:17:53.062 12:45:26 -- target/host_management.sh@74 -- # waitforlisten 503512 /var/tmp/bdevperf.sock 00:17:53.062 12:45:26 -- common/autotest_common.sh@829 -- # '[' -z 503512 ']' 00:17:53.062 12:45:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.062 12:45:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.062 12:45:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:53.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:53.062 12:45:26 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:53.062 12:45:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.062 12:45:26 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:53.062 12:45:26 -- common/autotest_common.sh@10 -- # set +x 00:17:53.062 12:45:26 -- nvmf/common.sh@520 -- # config=() 00:17:53.062 12:45:26 -- nvmf/common.sh@520 -- # local subsystem config 00:17:53.062 12:45:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:53.062 12:45:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:53.062 { 00:17:53.062 "params": { 00:17:53.062 "name": "Nvme$subsystem", 00:17:53.062 "trtype": "$TEST_TRANSPORT", 00:17:53.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:53.062 "adrfam": "ipv4", 00:17:53.062 "trsvcid": "$NVMF_PORT", 00:17:53.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:53.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:53.062 "hdgst": ${hdgst:-false}, 00:17:53.062 "ddgst": ${ddgst:-false} 00:17:53.062 }, 00:17:53.062 "method": "bdev_nvme_attach_controller" 00:17:53.062 } 00:17:53.062 EOF 00:17:53.062 )") 00:17:53.062 12:45:26 -- nvmf/common.sh@542 -- # cat 00:17:53.062 12:45:26 -- nvmf/common.sh@544 -- # jq . 
00:17:53.062 12:45:26 -- nvmf/common.sh@545 -- # IFS=, 00:17:53.062 12:45:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:53.062 "params": { 00:17:53.062 "name": "Nvme0", 00:17:53.062 "trtype": "rdma", 00:17:53.062 "traddr": "192.168.100.8", 00:17:53.062 "adrfam": "ipv4", 00:17:53.062 "trsvcid": "4420", 00:17:53.062 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:53.062 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:53.062 "hdgst": false, 00:17:53.062 "ddgst": false 00:17:53.062 }, 00:17:53.062 "method": "bdev_nvme_attach_controller" 00:17:53.062 }' 00:17:53.062 [2024-11-20 12:45:26.059862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:53.062 [2024-11-20 12:45:26.059914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid503512 ] 00:17:53.062 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.062 [2024-11-20 12:45:26.120749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.323 [2024-11-20 12:45:26.183499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.323 Running I/O for 10 seconds... 00:17:53.895 12:45:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.895 12:45:26 -- common/autotest_common.sh@862 -- # return 0 00:17:53.895 12:45:26 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:53.895 12:45:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.895 12:45:26 -- common/autotest_common.sh@10 -- # set +x 00:17:53.895 12:45:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.895 12:45:26 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:53.895 12:45:26 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:53.895 12:45:26 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:53.895 12:45:26 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:53.895 12:45:26 -- target/host_management.sh@52 -- # local ret=1 00:17:53.895 12:45:26 -- target/host_management.sh@53 -- # local i 00:17:53.895 12:45:26 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:53.895 12:45:26 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:53.895 12:45:26 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:53.895 12:45:26 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:53.895 12:45:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.895 12:45:26 -- common/autotest_common.sh@10 -- # set +x 00:17:53.895 12:45:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.896 12:45:26 -- target/host_management.sh@55 -- # read_io_count=2306 00:17:53.896 12:45:26 -- target/host_management.sh@58 -- # '[' 2306 -ge 100 ']' 00:17:53.896 12:45:26 -- target/host_management.sh@59 -- # ret=0 00:17:53.896 12:45:26 -- target/host_management.sh@60 -- # break 00:17:53.896 12:45:26 -- target/host_management.sh@64 -- # return 0 00:17:53.896 12:45:26 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:53.896 12:45:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.896 12:45:26 -- common/autotest_common.sh@10 -- # set +x 00:17:53.896 12:45:26 -- 
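bdevperf is not pointed at the target with CLI flags here; gen_nvmf_target_json prints a JSON config that reaches it through process substitution as --json /dev/fd/63. Expanded into a standalone file, the config printed above amounts to roughly the following; the outer subsystems/bdev wrapper is the usual SPDK JSON-config layout and is assumed, since the trace only shows the per-controller entry:

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same invocation as the trace, reading the config from a file instead of /dev/fd/63
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10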
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.896 12:45:26 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:53.896 12:45:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.896 12:45:26 -- common/autotest_common.sh@10 -- # set +x 00:17:53.896 12:45:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.896 12:45:26 -- target/host_management.sh@87 -- # sleep 1 00:17:54.840 [2024-11-20 12:45:27.934622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182600 00:17:54.840 [2024-11-20 12:45:27.934661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.840 [2024-11-20 12:45:27.934680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182400 00:17:54.840 [2024-11-20 12:45:27.934689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.840 [2024-11-20 12:45:27.934699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182400 00:17:54.840 [2024-11-20 12:45:27.934707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.840 [2024-11-20 12:45:27.934717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182700 00:17:54.840 [2024-11-20 12:45:27.934725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.840 [2024-11-20 12:45:27.934734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182400 00:17:54.840 [2024-11-20 12:45:27.934742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.840 [2024-11-20 12:45:27.934751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182400 00:17:54.840 [2024-11-20 12:45:27.934759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.840 [2024-11-20 12:45:27.934769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182500 00:17:54.840 [2024-11-20 12:45:27.934777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.840 [2024-11-20 12:45:27.934786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182500 00:17:54.840 [2024-11-20 12:45:27.934794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.840 [2024-11-20 12:45:27.934804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182400 00:17:54.840 [2024-11-20 12:45:27.934812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.840 [2024-11-20 12:45:27.934826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182600 00:17:54.840 [2024-11-20 12:45:27.934833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.840 [2024-11-20 12:45:27.934843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182500 00:17:54.840 [2024-11-20 12:45:27.934851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.840 [2024-11-20 12:45:27.934861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x182000 00:17:54.840 [2024-11-20 12:45:27.934868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.934878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182400 00:17:54.841 [2024-11-20 12:45:27.934886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.934896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182700 00:17:54.841 [2024-11-20 12:45:27.934903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.934913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x182000 00:17:54.841 [2024-11-20 12:45:27.934921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.934931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182700 00:17:54.841 [2024-11-20 12:45:27.934938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.934948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182700 00:17:54.841 [2024-11-20 12:45:27.934955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 
dnr:0 00:17:54.841 [2024-11-20 12:45:27.934964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182500 00:17:54.841 [2024-11-20 12:45:27.934972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.934986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182600 00:17:54.841 [2024-11-20 12:45:27.934994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182700 00:17:54.841 [2024-11-20 12:45:27.935011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x182000 00:17:54.841 [2024-11-20 12:45:27.935030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182700 00:17:54.841 [2024-11-20 12:45:27.935047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182500 00:17:54.841 [2024-11-20 12:45:27.935064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182400 00:17:54.841 [2024-11-20 12:45:27.935082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182600 00:17:54.841 [2024-11-20 12:45:27.935100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182400 00:17:54.841 [2024-11-20 12:45:27.935117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 
12:45:27.935127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182600 00:17:54.841 [2024-11-20 12:45:27.935134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182500 00:17:54.841 [2024-11-20 12:45:27.935151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x182500 00:17:54.841 [2024-11-20 12:45:27.935171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182400 00:17:54.841 [2024-11-20 12:45:27.935188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182700 00:17:54.841 [2024-11-20 12:45:27.935205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182500 00:17:54.841 [2024-11-20 12:45:27.935224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182600 00:17:54.841 [2024-11-20 12:45:27.935241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182600 00:17:54.841 [2024-11-20 12:45:27.935259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182500 00:17:54.841 [2024-11-20 12:45:27.935275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182700 00:17:54.841 [2024-11-20 12:45:27.935292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182700 00:17:54.841 [2024-11-20 12:45:27.935309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182500 00:17:54.841 [2024-11-20 12:45:27.935326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182500 00:17:54.841 [2024-11-20 12:45:27.935343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x182000 00:17:54.841 [2024-11-20 12:45:27.935359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182400 00:17:54.841 [2024-11-20 12:45:27.935376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182400 00:17:54.841 [2024-11-20 12:45:27.935394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182600 00:17:54.841 [2024-11-20 12:45:27.935416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:57344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012216000 len:0x10000 key:0x182300 00:17:54.841 [2024-11-20 12:45:27.935433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935442] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000121f5000 len:0x10000 key:0x182300 00:17:54.841 [2024-11-20 12:45:27.935450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:57728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001271d000 len:0x10000 key:0x182300 00:17:54.841 [2024-11-20 12:45:27.935466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000126fc000 len:0x10000 key:0x182300 00:17:54.841 [2024-11-20 12:45:27.935483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:58112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000126db000 len:0x10000 key:0x182300 00:17:54.841 [2024-11-20 12:45:27.935500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.841 [2024-11-20 12:45:27.935509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000126ba000 len:0x10000 key:0x182300 00:17:54.842 [2024-11-20 12:45:27.935517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.842 [2024-11-20 12:45:27.935526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012699000 len:0x10000 key:0x182300 00:17:54.842 [2024-11-20 12:45:27.935534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.842 [2024-11-20 12:45:27.935543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:58624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012678000 len:0x10000 key:0x182300 00:17:54.842 [2024-11-20 12:45:27.935551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.842 [2024-11-20 12:45:27.935561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012657000 len:0x10000 key:0x182300 00:17:54.842 [2024-11-20 12:45:27.935568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.842 [2024-11-20 12:45:27.935578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012636000 len:0x10000 key:0x182300 00:17:54.842 [2024-11-20 12:45:27.935585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.842 [2024-11-20 12:45:27.935595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59520 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012615000 len:0x10000 key:0x182300 00:17:54.842 [2024-11-20 12:45:27.935602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.842 [2024-11-20 12:45:27.935613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000125f4000 len:0x10000 key:0x182300 00:17:54.842 [2024-11-20 12:45:27.935621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.842 [2024-11-20 12:45:27.935630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000125d3000 len:0x10000 key:0x182300 00:17:54.842 [2024-11-20 12:45:27.935637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.842 [2024-11-20 12:45:27.935647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000125b2000 len:0x10000 key:0x182300 00:17:54.842 [2024-11-20 12:45:27.935654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.842 [2024-11-20 12:45:27.935663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012591000 len:0x10000 key:0x182300 00:17:54.842 [2024-11-20 12:45:27.935671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.842 [2024-11-20 12:45:27.935681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182500 00:17:54.842 [2024-11-20 12:45:27.935688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.842 [2024-11-20 12:45:27.935697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182400 00:17:54.842 [2024-11-20 12:45:27.935704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.842 [2024-11-20 12:45:27.935714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182600 00:17:54.842 [2024-11-20 12:45:27.935723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.842 [2024-11-20 12:45:27.935733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012570000 len:0x10000 key:0x182300 00:17:54.842 [2024-11-20 12:45:27.935740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.842 [2024-11-20 12:45:27.935750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61568 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20001296f000 len:0x10000 key:0x182300 00:17:54.842 [2024-11-20 12:45:27.935757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.842 [2024-11-20 12:45:27.935766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001294e000 len:0x10000 key:0x182300 00:17:54.842 [2024-11-20 12:45:27.935774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:90f5b000 sqhd:5310 p:0 m:0 dnr:0 00:17:54.842 [2024-11-20 12:45:27.938082] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192015c0 was disconnected and freed. reset controller. 00:17:54.842 [2024-11-20 12:45:27.939289] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:54.842 task offset: 62976 on job bdev=Nvme0n1 fails 00:17:54.842 00:17:54.842 Latency(us) 00:17:54.842 [2024-11-20T11:45:27.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.842 [2024-11-20T11:45:27.950Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:54.842 [2024-11-20T11:45:27.950Z] Job: Nvme0n1 ended in about 1.58 seconds with error 00:17:54.842 Verification LBA range: start 0x0 length 0x400 00:17:54.842 Nvme0n1 : 1.58 1598.67 99.92 40.57 0.00 38767.84 4205.23 1013623.47 00:17:54.842 [2024-11-20T11:45:27.950Z] =================================================================================================================== 00:17:54.842 [2024-11-20T11:45:27.950Z] Total : 1598.67 99.92 40.57 0.00 38767.84 4205.23 1013623.47 00:17:54.842 [2024-11-20 12:45:27.941296] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:55.103 12:45:27 -- target/host_management.sh@91 -- # kill -9 503512 00:17:55.103 12:45:27 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:55.103 12:45:27 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:55.103 12:45:27 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:55.103 12:45:27 -- nvmf/common.sh@520 -- # config=() 00:17:55.103 12:45:27 -- nvmf/common.sh@520 -- # local subsystem config 00:17:55.103 12:45:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:55.103 12:45:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:55.103 { 00:17:55.103 "params": { 00:17:55.103 "name": "Nvme$subsystem", 00:17:55.103 "trtype": "$TEST_TRANSPORT", 00:17:55.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:55.103 "adrfam": "ipv4", 00:17:55.103 "trsvcid": "$NVMF_PORT", 00:17:55.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:55.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:55.103 "hdgst": ${hdgst:-false}, 00:17:55.103 "ddgst": ${ddgst:-false} 00:17:55.103 }, 00:17:55.103 "method": "bdev_nvme_attach_controller" 00:17:55.103 } 00:17:55.103 EOF 00:17:55.103 )") 00:17:55.103 12:45:27 -- nvmf/common.sh@542 -- # cat 00:17:55.103 12:45:27 -- nvmf/common.sh@544 -- # jq . 
00:17:55.103 12:45:27 -- nvmf/common.sh@545 -- # IFS=, 00:17:55.103 12:45:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:55.103 "params": { 00:17:55.103 "name": "Nvme0", 00:17:55.103 "trtype": "rdma", 00:17:55.103 "traddr": "192.168.100.8", 00:17:55.103 "adrfam": "ipv4", 00:17:55.103 "trsvcid": "4420", 00:17:55.103 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:55.103 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:55.103 "hdgst": false, 00:17:55.103 "ddgst": false 00:17:55.103 }, 00:17:55.103 "method": "bdev_nvme_attach_controller" 00:17:55.103 }' 00:17:55.103 [2024-11-20 12:45:27.999543] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:55.103 [2024-11-20 12:45:27.999589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid503967 ] 00:17:55.103 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.103 [2024-11-20 12:45:28.060721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.103 [2024-11-20 12:45:28.123321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.363 Running I/O for 1 seconds... 00:17:56.304 00:17:56.304 Latency(us) 00:17:56.304 [2024-11-20T11:45:29.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.304 [2024-11-20T11:45:29.412Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:56.304 Verification LBA range: start 0x0 length 0x400 00:17:56.304 Nvme0n1 : 1.01 4755.20 297.20 0.00 0.00 13232.81 901.12 26323.63 00:17:56.304 [2024-11-20T11:45:29.412Z] =================================================================================================================== 00:17:56.304 [2024-11-20T11:45:29.412Z] Total : 4755.20 297.20 0.00 0.00 13232.81 901.12 26323.63 00:17:56.565 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 503512 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:17:56.565 12:45:29 -- target/host_management.sh@101 -- # stoptarget 00:17:56.565 12:45:29 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:56.565 12:45:29 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:56.565 12:45:29 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:56.565 12:45:29 -- target/host_management.sh@40 -- # nvmftestfini 00:17:56.565 12:45:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:56.565 12:45:29 -- nvmf/common.sh@116 -- # sync 00:17:56.565 12:45:29 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:56.565 12:45:29 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:56.565 12:45:29 -- nvmf/common.sh@119 -- # set +e 00:17:56.565 12:45:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:56.565 12:45:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:56.565 rmmod nvme_rdma 00:17:56.565 rmmod nvme_fabrics 00:17:56.565 12:45:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:56.565 12:45:29 -- nvmf/common.sh@123 -- # set -e 00:17:56.565 12:45:29 -- nvmf/common.sh@124 -- # return 0 00:17:56.565 12:45:29 -- nvmf/common.sh@477 -- # '[' -n 503293 ']' 00:17:56.565 12:45:29 -- nvmf/common.sh@478 -- # killprocess 503293 00:17:56.565 
12:45:29 -- common/autotest_common.sh@936 -- # '[' -z 503293 ']' 00:17:56.565 12:45:29 -- common/autotest_common.sh@940 -- # kill -0 503293 00:17:56.565 12:45:29 -- common/autotest_common.sh@941 -- # uname 00:17:56.565 12:45:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:56.565 12:45:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 503293 00:17:56.565 12:45:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:56.565 12:45:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:56.565 12:45:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 503293' 00:17:56.565 killing process with pid 503293 00:17:56.565 12:45:29 -- common/autotest_common.sh@955 -- # kill 503293 00:17:56.565 12:45:29 -- common/autotest_common.sh@960 -- # wait 503293 00:17:56.826 [2024-11-20 12:45:29.771085] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:56.826 12:45:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:56.826 12:45:29 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:56.826 00:17:56.826 real 0m4.929s 00:17:56.826 user 0m22.268s 00:17:56.826 sys 0m0.811s 00:17:56.826 12:45:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:56.826 12:45:29 -- common/autotest_common.sh@10 -- # set +x 00:17:56.826 ************************************ 00:17:56.826 END TEST nvmf_host_management 00:17:56.826 ************************************ 00:17:56.826 12:45:29 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:56.826 00:17:56.826 real 0m12.416s 00:17:56.826 user 0m24.420s 00:17:56.826 sys 0m6.273s 00:17:56.826 12:45:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:56.826 12:45:29 -- common/autotest_common.sh@10 -- # set +x 00:17:56.826 ************************************ 00:17:56.826 END TEST nvmf_host_management 00:17:56.826 ************************************ 00:17:56.826 12:45:29 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:56.826 12:45:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:56.826 12:45:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:56.826 12:45:29 -- common/autotest_common.sh@10 -- # set +x 00:17:56.826 ************************************ 00:17:56.826 START TEST nvmf_lvol 00:17:56.826 ************************************ 00:17:56.826 12:45:29 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:57.088 * Looking for test storage... 
00:17:57.088 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:57.088 12:45:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:57.088 12:45:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:57.088 12:45:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:57.088 12:45:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:57.088 12:45:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:57.088 12:45:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:57.088 12:45:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:57.088 12:45:30 -- scripts/common.sh@335 -- # IFS=.-: 00:17:57.088 12:45:30 -- scripts/common.sh@335 -- # read -ra ver1 00:17:57.088 12:45:30 -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.088 12:45:30 -- scripts/common.sh@336 -- # read -ra ver2 00:17:57.088 12:45:30 -- scripts/common.sh@337 -- # local 'op=<' 00:17:57.088 12:45:30 -- scripts/common.sh@339 -- # ver1_l=2 00:17:57.088 12:45:30 -- scripts/common.sh@340 -- # ver2_l=1 00:17:57.088 12:45:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:57.088 12:45:30 -- scripts/common.sh@343 -- # case "$op" in 00:17:57.088 12:45:30 -- scripts/common.sh@344 -- # : 1 00:17:57.088 12:45:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:57.088 12:45:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:57.088 12:45:30 -- scripts/common.sh@364 -- # decimal 1 00:17:57.088 12:45:30 -- scripts/common.sh@352 -- # local d=1 00:17:57.088 12:45:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.088 12:45:30 -- scripts/common.sh@354 -- # echo 1 00:17:57.088 12:45:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:57.088 12:45:30 -- scripts/common.sh@365 -- # decimal 2 00:17:57.088 12:45:30 -- scripts/common.sh@352 -- # local d=2 00:17:57.088 12:45:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.088 12:45:30 -- scripts/common.sh@354 -- # echo 2 00:17:57.088 12:45:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:57.088 12:45:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:57.089 12:45:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:57.089 12:45:30 -- scripts/common.sh@367 -- # return 0 00:17:57.089 12:45:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.089 12:45:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:57.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.089 --rc genhtml_branch_coverage=1 00:17:57.089 --rc genhtml_function_coverage=1 00:17:57.089 --rc genhtml_legend=1 00:17:57.089 --rc geninfo_all_blocks=1 00:17:57.089 --rc geninfo_unexecuted_blocks=1 00:17:57.089 00:17:57.089 ' 00:17:57.089 12:45:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:57.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.089 --rc genhtml_branch_coverage=1 00:17:57.089 --rc genhtml_function_coverage=1 00:17:57.089 --rc genhtml_legend=1 00:17:57.089 --rc geninfo_all_blocks=1 00:17:57.089 --rc geninfo_unexecuted_blocks=1 00:17:57.089 00:17:57.089 ' 00:17:57.089 12:45:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:57.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.089 --rc genhtml_branch_coverage=1 00:17:57.089 --rc genhtml_function_coverage=1 00:17:57.089 --rc genhtml_legend=1 00:17:57.089 --rc geninfo_all_blocks=1 00:17:57.089 --rc geninfo_unexecuted_blocks=1 00:17:57.089 00:17:57.089 ' 
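The storage probe above gates the lcov option set on its version through cmp_versions in scripts/common.sh, splitting each version string on '.', '-' and ':' and comparing it component by component. A compact sketch of that comparison technique, simplified to purely numeric components (the real helper also normalizes each field through decimal()):

# succeeds when $1 is strictly older than $2
version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "lcov < 2: use the pre-2.0 option set"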
00:17:57.089 12:45:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:57.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.089 --rc genhtml_branch_coverage=1 00:17:57.089 --rc genhtml_function_coverage=1 00:17:57.089 --rc genhtml_legend=1 00:17:57.089 --rc geninfo_all_blocks=1 00:17:57.089 --rc geninfo_unexecuted_blocks=1 00:17:57.089 00:17:57.089 ' 00:17:57.089 12:45:30 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:57.089 12:45:30 -- nvmf/common.sh@7 -- # uname -s 00:17:57.089 12:45:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.089 12:45:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.089 12:45:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.089 12:45:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.089 12:45:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.089 12:45:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.089 12:45:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.089 12:45:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.089 12:45:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.089 12:45:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.089 12:45:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:57.089 12:45:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:57.089 12:45:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.089 12:45:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.089 12:45:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.089 12:45:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:57.089 12:45:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.089 12:45:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.089 12:45:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.089 12:45:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.089 12:45:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.089 12:45:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.089 12:45:30 -- paths/export.sh@5 -- # export PATH 00:17:57.089 12:45:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.089 12:45:30 -- nvmf/common.sh@46 -- # : 0 00:17:57.089 12:45:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:57.089 12:45:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:57.089 12:45:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:57.089 12:45:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.089 12:45:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.089 12:45:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:57.089 12:45:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:57.089 12:45:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:57.089 12:45:30 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:57.089 12:45:30 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:57.089 12:45:30 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:57.089 12:45:30 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:57.089 12:45:30 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:57.089 12:45:30 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:57.089 12:45:30 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:57.089 12:45:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.089 12:45:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:57.089 12:45:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:57.089 12:45:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:57.089 12:45:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.089 12:45:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.089 12:45:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.089 12:45:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:57.089 12:45:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:57.089 12:45:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:57.089 12:45:30 -- common/autotest_common.sh@10 -- # set +x 00:18:05.234 12:45:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:05.234 12:45:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:05.234 12:45:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:05.234 12:45:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:05.234 12:45:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:05.234 12:45:36 -- 
nvmf/common.sh@292 -- # pci_drivers=() 00:18:05.234 12:45:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:05.234 12:45:36 -- nvmf/common.sh@294 -- # net_devs=() 00:18:05.235 12:45:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:05.235 12:45:36 -- nvmf/common.sh@295 -- # e810=() 00:18:05.235 12:45:36 -- nvmf/common.sh@295 -- # local -ga e810 00:18:05.235 12:45:36 -- nvmf/common.sh@296 -- # x722=() 00:18:05.235 12:45:36 -- nvmf/common.sh@296 -- # local -ga x722 00:18:05.235 12:45:36 -- nvmf/common.sh@297 -- # mlx=() 00:18:05.235 12:45:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:05.235 12:45:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.235 12:45:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.235 12:45:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.235 12:45:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.235 12:45:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.235 12:45:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.235 12:45:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.235 12:45:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.235 12:45:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.235 12:45:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.235 12:45:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.235 12:45:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:05.235 12:45:36 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:05.235 12:45:36 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:05.235 12:45:36 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:05.235 12:45:36 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:05.235 12:45:36 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:05.235 12:45:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:05.235 12:45:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:05.235 12:45:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:18:05.235 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:18:05.235 12:45:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:05.235 12:45:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:05.235 12:45:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:05.235 12:45:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:05.235 12:45:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:05.235 12:45:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:05.235 12:45:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:05.235 12:45:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:18:05.235 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:18:05.235 12:45:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:05.235 12:45:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:05.235 12:45:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:05.235 12:45:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:05.235 12:45:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:05.235 12:45:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:05.235 12:45:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:05.235 12:45:36 -- 
nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:05.235 12:45:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:05.235 12:45:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.235 12:45:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:05.235 12:45:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.235 12:45:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:18:05.235 Found net devices under 0000:98:00.0: mlx_0_0 00:18:05.235 12:45:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.235 12:45:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:05.235 12:45:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.235 12:45:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:05.235 12:45:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.235 12:45:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:18:05.235 Found net devices under 0000:98:00.1: mlx_0_1 00:18:05.235 12:45:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.235 12:45:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:05.235 12:45:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:05.235 12:45:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:05.235 12:45:36 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:05.235 12:45:36 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:05.235 12:45:36 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:05.235 12:45:36 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:05.235 12:45:36 -- nvmf/common.sh@57 -- # uname 00:18:05.235 12:45:36 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:05.235 12:45:36 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:05.235 12:45:36 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:05.235 12:45:36 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:05.235 12:45:36 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:05.235 12:45:36 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:05.235 12:45:36 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:05.235 12:45:36 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:05.235 12:45:36 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:05.235 12:45:36 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:05.235 12:45:36 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:05.235 12:45:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:05.235 12:45:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:05.235 12:45:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:05.235 12:45:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:05.235 12:45:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:05.235 12:45:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:05.235 12:45:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.235 12:45:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:05.235 12:45:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:05.235 12:45:37 -- nvmf/common.sh@104 -- # continue 2 00:18:05.235 12:45:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:05.235 12:45:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.235 12:45:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:05.235 12:45:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:18:05.235 12:45:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:05.235 12:45:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:05.235 12:45:37 -- nvmf/common.sh@104 -- # continue 2 00:18:05.235 12:45:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:05.235 12:45:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:05.235 12:45:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:05.235 12:45:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:05.235 12:45:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:05.235 12:45:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:05.235 12:45:37 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:05.235 12:45:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:05.235 12:45:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:05.235 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:05.235 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:18:05.235 altname enp152s0f0np0 00:18:05.235 altname ens817f0np0 00:18:05.235 inet 192.168.100.8/24 scope global mlx_0_0 00:18:05.235 valid_lft forever preferred_lft forever 00:18:05.235 12:45:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:05.235 12:45:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:05.235 12:45:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:05.235 12:45:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:05.235 12:45:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:05.235 12:45:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:05.235 12:45:37 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:05.235 12:45:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:05.235 12:45:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:05.235 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:05.235 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:18:05.235 altname enp152s0f1np1 00:18:05.235 altname ens817f1np1 00:18:05.235 inet 192.168.100.9/24 scope global mlx_0_1 00:18:05.235 valid_lft forever preferred_lft forever 00:18:05.235 12:45:37 -- nvmf/common.sh@410 -- # return 0 00:18:05.235 12:45:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:05.235 12:45:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:05.235 12:45:37 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:05.235 12:45:37 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:05.235 12:45:37 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:05.235 12:45:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:05.235 12:45:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:05.235 12:45:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:05.235 12:45:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:05.235 12:45:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:05.235 12:45:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:05.235 12:45:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.235 12:45:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:05.235 12:45:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:05.235 12:45:37 -- nvmf/common.sh@104 -- # continue 2 00:18:05.235 12:45:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:05.235 12:45:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.235 12:45:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:18:05.235 12:45:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.235 12:45:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:05.235 12:45:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:05.235 12:45:37 -- nvmf/common.sh@104 -- # continue 2 00:18:05.235 12:45:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:05.235 12:45:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:05.235 12:45:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:05.235 12:45:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:05.235 12:45:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:05.235 12:45:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:05.235 12:45:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:05.235 12:45:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:05.235 12:45:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:05.235 12:45:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:05.235 12:45:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:05.235 12:45:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:05.235 12:45:37 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:05.235 192.168.100.9' 00:18:05.235 12:45:37 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:05.235 192.168.100.9' 00:18:05.235 12:45:37 -- nvmf/common.sh@445 -- # head -n 1 00:18:05.235 12:45:37 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:05.235 12:45:37 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:05.235 192.168.100.9' 00:18:05.235 12:45:37 -- nvmf/common.sh@446 -- # tail -n +2 00:18:05.235 12:45:37 -- nvmf/common.sh@446 -- # head -n 1 00:18:05.235 12:45:37 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:05.235 12:45:37 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:05.235 12:45:37 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:05.235 12:45:37 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:05.235 12:45:37 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:05.235 12:45:37 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:05.235 12:45:37 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:18:05.235 12:45:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:05.235 12:45:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:05.235 12:45:37 -- common/autotest_common.sh@10 -- # set +x 00:18:05.235 12:45:37 -- nvmf/common.sh@469 -- # nvmfpid=508090 00:18:05.235 12:45:37 -- nvmf/common.sh@470 -- # waitforlisten 508090 00:18:05.235 12:45:37 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:05.235 12:45:37 -- common/autotest_common.sh@829 -- # '[' -z 508090 ']' 00:18:05.235 12:45:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.235 12:45:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.235 12:45:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.235 12:45:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.235 12:45:37 -- common/autotest_common.sh@10 -- # set +x 00:18:05.235 [2024-11-20 12:45:37.221492] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
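The address lookup traced above (nvmf/common.sh get_ip_address) reduces to a one-line pipeline; a minimal standalone sketch, using the interface names and addresses this run reported:
get_ip_address() {
  local interface=$1
  # print the IPv4 address(es) on the interface with the /prefix stripped
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # 192.168.100.8 on this host
get_ip_address mlx_0_1   # 192.168.100.9 on this host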
00:18:05.235 [2024-11-20 12:45:37.221544] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.235 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.235 [2024-11-20 12:45:37.283402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:05.235 [2024-11-20 12:45:37.346125] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:05.235 [2024-11-20 12:45:37.346250] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.235 [2024-11-20 12:45:37.346258] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.235 [2024-11-20 12:45:37.346265] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.235 [2024-11-20 12:45:37.346412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.235 [2024-11-20 12:45:37.346524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.235 [2024-11-20 12:45:37.346527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.235 12:45:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.235 12:45:38 -- common/autotest_common.sh@862 -- # return 0 00:18:05.235 12:45:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:05.235 12:45:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:05.235 12:45:38 -- common/autotest_common.sh@10 -- # set +x 00:18:05.235 12:45:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.235 12:45:38 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:05.235 [2024-11-20 12:45:38.216348] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x591cb0/0x5961a0) succeed. 00:18:05.235 [2024-11-20 12:45:38.230326] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x593200/0x5d7840) succeed. 
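Condensing the target bring-up traced above: nvmf_tgt is started on three cores and the RDMA transport is created over RPC. A sketch with the same flags as this run, paths shortened to be relative to the SPDK checkout:
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &   # -m 0x7: reactors on cores 0-2, as logged above
# (the harness waits for /var/tmp/spdk.sock before issuing RPCs)
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192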
00:18:05.497 12:45:38 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:05.497 12:45:38 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:05.497 12:45:38 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:05.758 12:45:38 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:05.758 12:45:38 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:06.019 12:45:38 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:06.019 12:45:39 -- target/nvmf_lvol.sh@29 -- # lvs=750c95d6-2403-4fee-9774-91ca922eec04 00:18:06.019 12:45:39 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 750c95d6-2403-4fee-9774-91ca922eec04 lvol 20 00:18:06.279 12:45:39 -- target/nvmf_lvol.sh@32 -- # lvol=dde0c979-c106-49bd-bdd1-5329c9d0a2bb 00:18:06.279 12:45:39 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:06.539 12:45:39 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dde0c979-c106-49bd-bdd1-5329c9d0a2bb 00:18:06.539 12:45:39 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:06.800 [2024-11-20 12:45:39.732415] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:06.800 12:45:39 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:07.061 12:45:39 -- target/nvmf_lvol.sh@42 -- # perf_pid=508566 00:18:07.061 12:45:39 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:07.061 12:45:39 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:07.061 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.003 12:45:40 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot dde0c979-c106-49bd-bdd1-5329c9d0a2bb MY_SNAPSHOT 00:18:08.264 12:45:41 -- target/nvmf_lvol.sh@47 -- # snapshot=0694775a-a131-43a6-b49b-29bb898060c7 00:18:08.264 12:45:41 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize dde0c979-c106-49bd-bdd1-5329c9d0a2bb 30 00:18:08.264 12:45:41 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0694775a-a131-43a6-b49b-29bb898060c7 MY_CLONE 00:18:08.525 12:45:41 -- target/nvmf_lvol.sh@49 -- # clone=ec0c36ff-a60b-4d4e-8fb9-64a7641130fe 00:18:08.525 12:45:41 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ec0c36ff-a60b-4d4e-8fb9-64a7641130fe 00:18:08.786 12:45:41 -- target/nvmf_lvol.sh@53 -- # wait 508566 00:18:18.790 Initializing NVMe Controllers 00:18:18.790 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode0 00:18:18.790 Controller IO queue size 128, less than required. 00:18:18.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:18.790 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:18.790 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:18.790 Initialization complete. Launching workers. 00:18:18.790 ======================================================== 00:18:18.790 Latency(us) 00:18:18.790 Device Information : IOPS MiB/s Average min max 00:18:18.790 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 23441.20 91.57 5461.22 2294.98 39190.41 00:18:18.790 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 23732.20 92.70 5394.06 1644.35 40065.12 00:18:18.790 ======================================================== 00:18:18.790 Total : 47173.40 184.27 5427.44 1644.35 40065.12 00:18:18.790 00:18:18.790 12:45:51 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:18.790 12:45:51 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dde0c979-c106-49bd-bdd1-5329c9d0a2bb 00:18:18.790 12:45:51 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 750c95d6-2403-4fee-9774-91ca922eec04 00:18:18.790 12:45:51 -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:18.790 12:45:51 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:18.790 12:45:51 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:18.790 12:45:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:18.790 12:45:51 -- nvmf/common.sh@116 -- # sync 00:18:18.790 12:45:51 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:18.790 12:45:51 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:18.790 12:45:51 -- nvmf/common.sh@119 -- # set +e 00:18:18.790 12:45:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:18.790 12:45:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:18.790 rmmod nvme_rdma 00:18:18.790 rmmod nvme_fabrics 00:18:18.790 12:45:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:18.790 12:45:51 -- nvmf/common.sh@123 -- # set -e 00:18:18.790 12:45:51 -- nvmf/common.sh@124 -- # return 0 00:18:18.790 12:45:51 -- nvmf/common.sh@477 -- # '[' -n 508090 ']' 00:18:18.790 12:45:51 -- nvmf/common.sh@478 -- # killprocess 508090 00:18:18.790 12:45:51 -- common/autotest_common.sh@936 -- # '[' -z 508090 ']' 00:18:18.790 12:45:51 -- common/autotest_common.sh@940 -- # kill -0 508090 00:18:18.790 12:45:51 -- common/autotest_common.sh@941 -- # uname 00:18:18.790 12:45:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.790 12:45:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 508090 00:18:19.052 12:45:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:19.052 12:45:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:19.052 12:45:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 508090' 00:18:19.052 killing process with pid 508090 00:18:19.052 12:45:51 -- common/autotest_common.sh@955 -- # kill 508090 00:18:19.052 12:45:51 -- common/autotest_common.sh@960 -- # wait 508090 00:18:19.052 12:45:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:19.052 12:45:52 -- 
nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:19.052 00:18:19.052 real 0m22.263s 00:18:19.052 user 1m10.998s 00:18:19.052 sys 0m6.131s 00:18:19.052 12:45:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:19.052 12:45:52 -- common/autotest_common.sh@10 -- # set +x 00:18:19.052 ************************************ 00:18:19.052 END TEST nvmf_lvol 00:18:19.052 ************************************ 00:18:19.314 12:45:52 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:18:19.314 12:45:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:19.314 12:45:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:19.314 12:45:52 -- common/autotest_common.sh@10 -- # set +x 00:18:19.314 ************************************ 00:18:19.314 START TEST nvmf_lvs_grow 00:18:19.314 ************************************ 00:18:19.314 12:45:52 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:18:19.314 * Looking for test storage... 00:18:19.314 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:19.314 12:45:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:19.314 12:45:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:19.314 12:45:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:19.314 12:45:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:19.314 12:45:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:19.314 12:45:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:19.314 12:45:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:19.314 12:45:52 -- scripts/common.sh@335 -- # IFS=.-: 00:18:19.314 12:45:52 -- scripts/common.sh@335 -- # read -ra ver1 00:18:19.314 12:45:52 -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.314 12:45:52 -- scripts/common.sh@336 -- # read -ra ver2 00:18:19.314 12:45:52 -- scripts/common.sh@337 -- # local 'op=<' 00:18:19.314 12:45:52 -- scripts/common.sh@339 -- # ver1_l=2 00:18:19.314 12:45:52 -- scripts/common.sh@340 -- # ver2_l=1 00:18:19.314 12:45:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:19.314 12:45:52 -- scripts/common.sh@343 -- # case "$op" in 00:18:19.314 12:45:52 -- scripts/common.sh@344 -- # : 1 00:18:19.314 12:45:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:19.314 12:45:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:19.314 12:45:52 -- scripts/common.sh@364 -- # decimal 1 00:18:19.314 12:45:52 -- scripts/common.sh@352 -- # local d=1 00:18:19.314 12:45:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.314 12:45:52 -- scripts/common.sh@354 -- # echo 1 00:18:19.314 12:45:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:19.314 12:45:52 -- scripts/common.sh@365 -- # decimal 2 00:18:19.314 12:45:52 -- scripts/common.sh@352 -- # local d=2 00:18:19.314 12:45:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.314 12:45:52 -- scripts/common.sh@354 -- # echo 2 00:18:19.314 12:45:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:19.314 12:45:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:19.314 12:45:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:19.314 12:45:52 -- scripts/common.sh@367 -- # return 0 00:18:19.314 12:45:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.315 12:45:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:19.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.315 --rc genhtml_branch_coverage=1 00:18:19.315 --rc genhtml_function_coverage=1 00:18:19.315 --rc genhtml_legend=1 00:18:19.315 --rc geninfo_all_blocks=1 00:18:19.315 --rc geninfo_unexecuted_blocks=1 00:18:19.315 00:18:19.315 ' 00:18:19.315 12:45:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:19.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.315 --rc genhtml_branch_coverage=1 00:18:19.315 --rc genhtml_function_coverage=1 00:18:19.315 --rc genhtml_legend=1 00:18:19.315 --rc geninfo_all_blocks=1 00:18:19.315 --rc geninfo_unexecuted_blocks=1 00:18:19.315 00:18:19.315 ' 00:18:19.315 12:45:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:19.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.315 --rc genhtml_branch_coverage=1 00:18:19.315 --rc genhtml_function_coverage=1 00:18:19.315 --rc genhtml_legend=1 00:18:19.315 --rc geninfo_all_blocks=1 00:18:19.315 --rc geninfo_unexecuted_blocks=1 00:18:19.315 00:18:19.315 ' 00:18:19.315 12:45:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:19.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.315 --rc genhtml_branch_coverage=1 00:18:19.315 --rc genhtml_function_coverage=1 00:18:19.315 --rc genhtml_legend=1 00:18:19.315 --rc geninfo_all_blocks=1 00:18:19.315 --rc geninfo_unexecuted_blocks=1 00:18:19.315 00:18:19.315 ' 00:18:19.315 12:45:52 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:19.315 12:45:52 -- nvmf/common.sh@7 -- # uname -s 00:18:19.315 12:45:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.315 12:45:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.315 12:45:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.315 12:45:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.315 12:45:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.315 12:45:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.315 12:45:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.315 12:45:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.315 12:45:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.315 12:45:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.315 12:45:52 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:19.315 12:45:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:19.315 12:45:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.315 12:45:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.315 12:45:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:19.315 12:45:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:19.315 12:45:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.315 12:45:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.315 12:45:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.315 12:45:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.315 12:45:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.315 12:45:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.315 12:45:52 -- paths/export.sh@5 -- # export PATH 00:18:19.315 12:45:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.315 12:45:52 -- nvmf/common.sh@46 -- # : 0 00:18:19.315 12:45:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:19.315 12:45:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:19.315 12:45:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:19.315 12:45:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.315 12:45:52 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.315 12:45:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:19.315 12:45:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:19.315 12:45:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:19.315 12:45:52 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:19.315 12:45:52 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:19.315 12:45:52 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:18:19.315 12:45:52 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:19.315 12:45:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.315 12:45:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:19.315 12:45:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:19.315 12:45:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:19.315 12:45:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.315 12:45:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.315 12:45:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.315 12:45:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:19.315 12:45:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:19.315 12:45:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:19.315 12:45:52 -- common/autotest_common.sh@10 -- # set +x 00:18:27.484 12:45:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:27.484 12:45:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:27.484 12:45:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:27.484 12:45:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:27.484 12:45:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:27.484 12:45:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:27.484 12:45:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:27.484 12:45:59 -- nvmf/common.sh@294 -- # net_devs=() 00:18:27.484 12:45:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:27.484 12:45:59 -- nvmf/common.sh@295 -- # e810=() 00:18:27.484 12:45:59 -- nvmf/common.sh@295 -- # local -ga e810 00:18:27.484 12:45:59 -- nvmf/common.sh@296 -- # x722=() 00:18:27.484 12:45:59 -- nvmf/common.sh@296 -- # local -ga x722 00:18:27.484 12:45:59 -- nvmf/common.sh@297 -- # mlx=() 00:18:27.484 12:45:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:27.484 12:45:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.484 12:45:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.484 12:45:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.484 12:45:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.484 12:45:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.484 12:45:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.484 12:45:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.484 12:45:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.485 12:45:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.485 12:45:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.485 12:45:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.485 12:45:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:27.485 12:45:59 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 
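The lcov check traced at the start of this test (scripts/common.sh: lt 1.15 2 via cmp_versions) compares dotted version strings field by field. A minimal sketch of the same idea, not a copy of the script:
cmp_versions() {
  local -a ver1 ver2
  IFS='.-' read -ra ver1 <<< "$1"
  IFS='.-' read -ra ver2 <<< "$3"
  local v a b
  for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
    a=${ver1[v]:-0} b=${ver2[v]:-0}
    ((a == b)) && continue
    case "$2" in
      '<') ((a < b)) && return 0 || return 1 ;;
      '>') ((a > b)) && return 0 || return 1 ;;
    esac
  done
  return 1   # all fields equal, so not strictly '<' or '>'
}
cmp_versions 1.15 '<' 2 && echo 'lcov 1.15 predates 2: use the compat LCOV_OPTS'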
00:18:27.485 12:45:59 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:27.485 12:45:59 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:27.485 12:45:59 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:27.485 12:45:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:27.485 12:45:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:27.485 12:45:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:18:27.485 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:18:27.485 12:45:59 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:27.485 12:45:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:27.485 12:45:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:18:27.485 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:18:27.485 12:45:59 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:27.485 12:45:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:27.485 12:45:59 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:27.485 12:45:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.485 12:45:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:27.485 12:45:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.485 12:45:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:18:27.485 Found net devices under 0000:98:00.0: mlx_0_0 00:18:27.485 12:45:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.485 12:45:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:27.485 12:45:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.485 12:45:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:27.485 12:45:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.485 12:45:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:18:27.485 Found net devices under 0000:98:00.1: mlx_0_1 00:18:27.485 12:45:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.485 12:45:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:27.485 12:45:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:27.485 12:45:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:27.485 12:45:59 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:27.485 12:45:59 -- nvmf/common.sh@57 -- # uname 00:18:27.485 12:45:59 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux 
']' 00:18:27.485 12:45:59 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:27.485 12:45:59 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:27.485 12:45:59 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:27.485 12:45:59 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:27.485 12:45:59 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:27.485 12:45:59 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:27.485 12:45:59 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:27.485 12:45:59 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:27.485 12:45:59 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:27.485 12:45:59 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:27.485 12:45:59 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:27.485 12:45:59 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:27.485 12:45:59 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:27.485 12:45:59 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:27.485 12:45:59 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:27.485 12:45:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:27.485 12:45:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.485 12:45:59 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:27.485 12:45:59 -- nvmf/common.sh@104 -- # continue 2 00:18:27.485 12:45:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:27.485 12:45:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.485 12:45:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.485 12:45:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:27.485 12:45:59 -- nvmf/common.sh@104 -- # continue 2 00:18:27.485 12:45:59 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:27.485 12:45:59 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:27.485 12:45:59 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:27.485 12:45:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:27.485 12:45:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:27.485 12:45:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:27.485 12:45:59 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:27.485 12:45:59 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:27.485 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:27.485 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:18:27.485 altname enp152s0f0np0 00:18:27.485 altname ens817f0np0 00:18:27.485 inet 192.168.100.8/24 scope global mlx_0_0 00:18:27.485 valid_lft forever preferred_lft forever 00:18:27.485 12:45:59 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:27.485 12:45:59 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:27.485 12:45:59 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:27.485 12:45:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:27.485 12:45:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:27.485 12:45:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:27.485 12:45:59 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:27.485 12:45:59 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@80 -- # 
ip addr show mlx_0_1 00:18:27.485 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:27.485 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:18:27.485 altname enp152s0f1np1 00:18:27.485 altname ens817f1np1 00:18:27.485 inet 192.168.100.9/24 scope global mlx_0_1 00:18:27.485 valid_lft forever preferred_lft forever 00:18:27.485 12:45:59 -- nvmf/common.sh@410 -- # return 0 00:18:27.485 12:45:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:27.485 12:45:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:27.485 12:45:59 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:27.485 12:45:59 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:27.485 12:45:59 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:27.485 12:45:59 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:27.485 12:45:59 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:27.485 12:45:59 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:27.485 12:45:59 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:27.485 12:45:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:27.485 12:45:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.485 12:45:59 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:27.485 12:45:59 -- nvmf/common.sh@104 -- # continue 2 00:18:27.485 12:45:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:27.485 12:45:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.485 12:45:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.485 12:45:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:27.485 12:45:59 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:27.485 12:45:59 -- nvmf/common.sh@104 -- # continue 2 00:18:27.485 12:45:59 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:27.485 12:45:59 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:27.485 12:45:59 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:27.485 12:45:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:27.485 12:45:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:27.485 12:45:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:27.485 12:45:59 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:27.485 12:45:59 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:27.485 12:45:59 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:27.485 12:45:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:27.485 12:45:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:27.485 12:45:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:27.485 12:45:59 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:27.485 192.168.100.9' 00:18:27.485 12:45:59 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:27.485 192.168.100.9' 00:18:27.485 12:45:59 -- nvmf/common.sh@445 -- # head -n 1 00:18:27.485 12:45:59 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:27.485 12:45:59 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:27.485 192.168.100.9' 00:18:27.485 12:45:59 -- nvmf/common.sh@446 -- # tail -n +2 00:18:27.485 12:45:59 -- nvmf/common.sh@446 -- # head -n 1 00:18:27.485 12:45:59 -- nvmf/common.sh@446 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:27.485 12:45:59 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:27.485 12:45:59 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:27.485 12:45:59 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:27.485 12:45:59 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:27.485 12:45:59 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:27.485 12:45:59 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:18:27.485 12:45:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:27.485 12:45:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:27.485 12:45:59 -- common/autotest_common.sh@10 -- # set +x 00:18:27.485 12:45:59 -- nvmf/common.sh@469 -- # nvmfpid=514770 00:18:27.485 12:45:59 -- nvmf/common.sh@470 -- # waitforlisten 514770 00:18:27.485 12:45:59 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:27.485 12:45:59 -- common/autotest_common.sh@829 -- # '[' -z 514770 ']' 00:18:27.485 12:45:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.486 12:45:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.486 12:45:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.486 12:45:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.486 12:45:59 -- common/autotest_common.sh@10 -- # set +x 00:18:27.486 [2024-11-20 12:45:59.568812] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:27.486 [2024-11-20 12:45:59.568878] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.486 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.486 [2024-11-20 12:45:59.633563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.486 [2024-11-20 12:45:59.705656] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:27.486 [2024-11-20 12:45:59.705778] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.486 [2024-11-20 12:45:59.705787] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.486 [2024-11-20 12:45:59.705795] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
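The first/second target address selection traced above is just a head/tail split of RDMA_IP_LIST; a condensed sketch with the values this run produced:
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9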
00:18:27.486 [2024-11-20 12:45:59.705813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.486 12:46:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.486 12:46:00 -- common/autotest_common.sh@862 -- # return 0 00:18:27.486 12:46:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:27.486 12:46:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:27.486 12:46:00 -- common/autotest_common.sh@10 -- # set +x 00:18:27.486 12:46:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.486 12:46:00 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:27.486 [2024-11-20 12:46:00.558447] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1814650/0x1818b40) succeed. 00:18:27.486 [2024-11-20 12:46:00.571523] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1815b50/0x185a1e0) succeed. 00:18:27.746 12:46:00 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:27.746 12:46:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:27.746 12:46:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:27.746 12:46:00 -- common/autotest_common.sh@10 -- # set +x 00:18:27.746 ************************************ 00:18:27.746 START TEST lvs_grow_clean 00:18:27.746 ************************************ 00:18:27.746 12:46:00 -- common/autotest_common.sh@1114 -- # lvs_grow 00:18:27.746 12:46:00 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:27.746 12:46:00 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:27.746 12:46:00 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:27.746 12:46:00 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:27.746 12:46:00 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:27.746 12:46:00 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:27.746 12:46:00 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:27.747 12:46:00 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:27.747 12:46:00 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:28.007 12:46:00 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:28.007 12:46:00 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:28.007 12:46:01 -- target/nvmf_lvs_grow.sh@28 -- # lvs=52b9cc62-69dd-405a-b7fb-3a61154cfa26 00:18:28.007 12:46:01 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52b9cc62-69dd-405a-b7fb-3a61154cfa26 00:18:28.007 12:46:01 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:28.268 12:46:01 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:28.268 12:46:01 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:28.268 12:46:01 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
52b9cc62-69dd-405a-b7fb-3a61154cfa26 lvol 150 00:18:28.268 12:46:01 -- target/nvmf_lvs_grow.sh@33 -- # lvol=619275a0-e8f1-4bc1-9ce1-0f14b75aa9e9 00:18:28.268 12:46:01 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:28.268 12:46:01 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:28.529 [2024-11-20 12:46:01.498154] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:28.529 [2024-11-20 12:46:01.498204] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:28.529 true 00:18:28.529 12:46:01 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52b9cc62-69dd-405a-b7fb-3a61154cfa26 00:18:28.529 12:46:01 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:28.789 12:46:01 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:28.789 12:46:01 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:28.789 12:46:01 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 619275a0-e8f1-4bc1-9ce1-0f14b75aa9e9 00:18:29.049 12:46:01 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:29.049 [2024-11-20 12:46:02.104237] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:29.049 12:46:02 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:29.310 12:46:02 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=515265 00:18:29.310 12:46:02 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:29.310 12:46:02 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:29.310 12:46:02 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 515265 /var/tmp/bdevperf.sock 00:18:29.310 12:46:02 -- common/autotest_common.sh@829 -- # '[' -z 515265 ']' 00:18:29.310 12:46:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.310 12:46:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.310 12:46:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:29.310 12:46:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.310 12:46:02 -- common/autotest_common.sh@10 -- # set +x 00:18:29.310 [2024-11-20 12:46:02.327575] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
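The lvs_grow_clean setup traced above, condensed; the sizes and the 4 MiB cluster size are the ones this run used, and <aio_file>/<lvs_uuid> stand in for the long workspace path and the UUID reported above. The lvstore itself is grown further down (bdev_lvol_grow_lvstore), taking total_data_clusters from 49 to 99:
truncate -s 200M <aio_file>                       # 200 MiB backing file
scripts/rpc.py bdev_aio_create <aio_file> aio_bdev 4096
scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
scripts/rpc.py bdev_lvol_create -u <lvs_uuid> lvol 150    # 150 MiB lvol on a 200 MiB store
truncate -s 400M <aio_file>                       # enlarge the backing file...
scripts/rpc.py bdev_aio_rescan aio_bdev           # ...and let the aio bdev pick up the new size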
00:18:29.310 [2024-11-20 12:46:02.327632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid515265 ] 00:18:29.310 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.310 [2024-11-20 12:46:02.406969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.570 [2024-11-20 12:46:02.469532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.143 12:46:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.143 12:46:03 -- common/autotest_common.sh@862 -- # return 0 00:18:30.143 12:46:03 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:30.404 Nvme0n1 00:18:30.404 12:46:03 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:30.404 [ 00:18:30.404 { 00:18:30.404 "name": "Nvme0n1", 00:18:30.404 "aliases": [ 00:18:30.404 "619275a0-e8f1-4bc1-9ce1-0f14b75aa9e9" 00:18:30.404 ], 00:18:30.404 "product_name": "NVMe disk", 00:18:30.404 "block_size": 4096, 00:18:30.404 "num_blocks": 38912, 00:18:30.404 "uuid": "619275a0-e8f1-4bc1-9ce1-0f14b75aa9e9", 00:18:30.404 "assigned_rate_limits": { 00:18:30.404 "rw_ios_per_sec": 0, 00:18:30.404 "rw_mbytes_per_sec": 0, 00:18:30.404 "r_mbytes_per_sec": 0, 00:18:30.404 "w_mbytes_per_sec": 0 00:18:30.404 }, 00:18:30.404 "claimed": false, 00:18:30.404 "zoned": false, 00:18:30.404 "supported_io_types": { 00:18:30.404 "read": true, 00:18:30.404 "write": true, 00:18:30.404 "unmap": true, 00:18:30.404 "write_zeroes": true, 00:18:30.404 "flush": true, 00:18:30.404 "reset": true, 00:18:30.404 "compare": true, 00:18:30.404 "compare_and_write": true, 00:18:30.404 "abort": true, 00:18:30.404 "nvme_admin": true, 00:18:30.404 "nvme_io": true 00:18:30.404 }, 00:18:30.404 "memory_domains": [ 00:18:30.404 { 00:18:30.404 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:30.404 "dma_device_type": 0 00:18:30.404 } 00:18:30.404 ], 00:18:30.404 "driver_specific": { 00:18:30.404 "nvme": [ 00:18:30.404 { 00:18:30.404 "trid": { 00:18:30.404 "trtype": "RDMA", 00:18:30.404 "adrfam": "IPv4", 00:18:30.404 "traddr": "192.168.100.8", 00:18:30.404 "trsvcid": "4420", 00:18:30.404 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:30.404 }, 00:18:30.404 "ctrlr_data": { 00:18:30.404 "cntlid": 1, 00:18:30.404 "vendor_id": "0x8086", 00:18:30.404 "model_number": "SPDK bdev Controller", 00:18:30.404 "serial_number": "SPDK0", 00:18:30.404 "firmware_revision": "24.01.1", 00:18:30.404 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:30.404 "oacs": { 00:18:30.404 "security": 0, 00:18:30.404 "format": 0, 00:18:30.404 "firmware": 0, 00:18:30.404 "ns_manage": 0 00:18:30.404 }, 00:18:30.404 "multi_ctrlr": true, 00:18:30.404 "ana_reporting": false 00:18:30.404 }, 00:18:30.404 "vs": { 00:18:30.404 "nvme_version": "1.3" 00:18:30.404 }, 00:18:30.404 "ns_data": { 00:18:30.404 "id": 1, 00:18:30.404 "can_share": true 00:18:30.404 } 00:18:30.404 } 00:18:30.404 ], 00:18:30.404 "mp_policy": "active_passive" 00:18:30.404 } 00:18:30.404 } 00:18:30.404 ] 00:18:30.404 12:46:03 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=515602 00:18:30.404 12:46:03 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:30.404 12:46:03 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:30.665 Running I/O for 10 seconds... 00:18:31.608 Latency(us) 00:18:31.608 [2024-11-20T11:46:04.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.608 [2024-11-20T11:46:04.716Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:31.608 Nvme0n1 : 1.00 26850.00 104.88 0.00 0.00 0.00 0.00 0.00 00:18:31.608 [2024-11-20T11:46:04.716Z] =================================================================================================================== 00:18:31.608 [2024-11-20T11:46:04.716Z] Total : 26850.00 104.88 0.00 0.00 0.00 0.00 0.00 00:18:31.608 00:18:32.548 12:46:05 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 52b9cc62-69dd-405a-b7fb-3a61154cfa26 00:18:32.548 [2024-11-20T11:46:05.656Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:32.548 Nvme0n1 : 2.00 27185.00 106.19 0.00 0.00 0.00 0.00 0.00 00:18:32.548 [2024-11-20T11:46:05.656Z] =================================================================================================================== 00:18:32.548 [2024-11-20T11:46:05.656Z] Total : 27185.00 106.19 0.00 0.00 0.00 0.00 0.00 00:18:32.548 00:18:32.548 true 00:18:32.809 12:46:05 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52b9cc62-69dd-405a-b7fb-3a61154cfa26 00:18:32.809 12:46:05 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:32.809 12:46:05 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:32.809 12:46:05 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:32.809 12:46:05 -- target/nvmf_lvs_grow.sh@65 -- # wait 515602 00:18:33.751 [2024-11-20T11:46:06.859Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:33.751 Nvme0n1 : 3.00 27307.33 106.67 0.00 0.00 0.00 0.00 0.00 00:18:33.751 [2024-11-20T11:46:06.859Z] =================================================================================================================== 00:18:33.751 [2024-11-20T11:46:06.859Z] Total : 27307.33 106.67 0.00 0.00 0.00 0.00 0.00 00:18:33.751 00:18:34.693 [2024-11-20T11:46:07.801Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:34.693 Nvme0n1 : 4.00 27400.50 107.03 0.00 0.00 0.00 0.00 0.00 00:18:34.693 [2024-11-20T11:46:07.801Z] =================================================================================================================== 00:18:34.693 [2024-11-20T11:46:07.801Z] Total : 27400.50 107.03 0.00 0.00 0.00 0.00 0.00 00:18:34.693 00:18:35.635 [2024-11-20T11:46:08.743Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:35.635 Nvme0n1 : 5.00 27462.20 107.27 0.00 0.00 0.00 0.00 0.00 00:18:35.635 [2024-11-20T11:46:08.743Z] =================================================================================================================== 00:18:35.635 [2024-11-20T11:46:08.743Z] Total : 27462.20 107.27 0.00 0.00 0.00 0.00 0.00 00:18:35.635 00:18:36.577 [2024-11-20T11:46:09.685Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:36.577 Nvme0n1 : 6.00 27509.17 107.46 0.00 0.00 0.00 0.00 0.00 00:18:36.577 [2024-11-20T11:46:09.685Z] 
=================================================================================================================== 00:18:36.577 [2024-11-20T11:46:09.685Z] Total : 27509.17 107.46 0.00 0.00 0.00 0.00 0.00 00:18:36.577 00:18:37.521 [2024-11-20T11:46:10.629Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:37.521 Nvme0n1 : 7.00 27538.00 107.57 0.00 0.00 0.00 0.00 0.00 00:18:37.521 [2024-11-20T11:46:10.629Z] =================================================================================================================== 00:18:37.521 [2024-11-20T11:46:10.629Z] Total : 27538.00 107.57 0.00 0.00 0.00 0.00 0.00 00:18:37.521 00:18:38.906 [2024-11-20T11:46:12.014Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:38.906 Nvme0n1 : 8.00 27563.75 107.67 0.00 0.00 0.00 0.00 0.00 00:18:38.906 [2024-11-20T11:46:12.014Z] =================================================================================================================== 00:18:38.906 [2024-11-20T11:46:12.014Z] Total : 27563.75 107.67 0.00 0.00 0.00 0.00 0.00 00:18:38.906 00:18:39.849 [2024-11-20T11:46:12.957Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:39.849 Nvme0n1 : 9.00 27584.11 107.75 0.00 0.00 0.00 0.00 0.00 00:18:39.849 [2024-11-20T11:46:12.957Z] =================================================================================================================== 00:18:39.849 [2024-11-20T11:46:12.957Z] Total : 27584.11 107.75 0.00 0.00 0.00 0.00 0.00 00:18:39.849 00:18:40.792 [2024-11-20T11:46:13.900Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:40.792 Nvme0n1 : 10.00 27600.00 107.81 0.00 0.00 0.00 0.00 0.00 00:18:40.792 [2024-11-20T11:46:13.900Z] =================================================================================================================== 00:18:40.792 [2024-11-20T11:46:13.900Z] Total : 27600.00 107.81 0.00 0.00 0.00 0.00 0.00 00:18:40.792 00:18:40.792 00:18:40.792 Latency(us) 00:18:40.792 [2024-11-20T11:46:13.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.792 [2024-11-20T11:46:13.900Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:40.792 Nvme0n1 : 10.00 27601.45 107.82 0.00 0.00 4633.86 3440.64 19879.25 00:18:40.792 [2024-11-20T11:46:13.900Z] =================================================================================================================== 00:18:40.792 [2024-11-20T11:46:13.900Z] Total : 27601.45 107.82 0.00 0.00 4633.86 3440.64 19879.25 00:18:40.792 0 00:18:40.792 12:46:13 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 515265 00:18:40.792 12:46:13 -- common/autotest_common.sh@936 -- # '[' -z 515265 ']' 00:18:40.792 12:46:13 -- common/autotest_common.sh@940 -- # kill -0 515265 00:18:40.792 12:46:13 -- common/autotest_common.sh@941 -- # uname 00:18:40.792 12:46:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:40.792 12:46:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 515265 00:18:40.792 12:46:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:40.792 12:46:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:40.792 12:46:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 515265' 00:18:40.793 killing process with pid 515265 00:18:40.793 12:46:13 -- common/autotest_common.sh@955 -- # kill 515265 00:18:40.793 Received shutdown signal, test time was about 10.000000 seconds 00:18:40.793 
00:18:40.793 Latency(us) 00:18:40.793 [2024-11-20T11:46:13.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.793 [2024-11-20T11:46:13.901Z] =================================================================================================================== 00:18:40.793 [2024-11-20T11:46:13.901Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.793 12:46:13 -- common/autotest_common.sh@960 -- # wait 515265 00:18:40.793 12:46:13 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:41.054 12:46:14 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52b9cc62-69dd-405a-b7fb-3a61154cfa26 00:18:41.054 12:46:14 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:41.315 12:46:14 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:41.315 12:46:14 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:41.315 12:46:14 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:41.315 [2024-11-20 12:46:14.322771] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:41.315 12:46:14 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52b9cc62-69dd-405a-b7fb-3a61154cfa26 00:18:41.315 12:46:14 -- common/autotest_common.sh@650 -- # local es=0 00:18:41.315 12:46:14 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52b9cc62-69dd-405a-b7fb-3a61154cfa26 00:18:41.315 12:46:14 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:41.315 12:46:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:41.315 12:46:14 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:41.315 12:46:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:41.315 12:46:14 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:41.315 12:46:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:41.315 12:46:14 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:41.315 12:46:14 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:41.315 12:46:14 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52b9cc62-69dd-405a-b7fb-3a61154cfa26 00:18:41.576 request: 00:18:41.576 { 00:18:41.576 "uuid": "52b9cc62-69dd-405a-b7fb-3a61154cfa26", 00:18:41.576 "method": "bdev_lvol_get_lvstores", 00:18:41.576 "req_id": 1 00:18:41.576 } 00:18:41.576 Got JSON-RPC error response 00:18:41.576 response: 00:18:41.576 { 00:18:41.576 "code": -19, 00:18:41.576 "message": "No such device" 00:18:41.576 } 00:18:41.576 12:46:14 -- common/autotest_common.sh@653 -- # es=1 00:18:41.576 12:46:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:41.576 12:46:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:41.576 12:46:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:41.576 12:46:14 -- target/nvmf_lvs_grow.sh@85 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:41.576 aio_bdev 00:18:41.576 12:46:14 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 619275a0-e8f1-4bc1-9ce1-0f14b75aa9e9 00:18:41.576 12:46:14 -- common/autotest_common.sh@897 -- # local bdev_name=619275a0-e8f1-4bc1-9ce1-0f14b75aa9e9 00:18:41.576 12:46:14 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:41.576 12:46:14 -- common/autotest_common.sh@899 -- # local i 00:18:41.576 12:46:14 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:41.576 12:46:14 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:41.576 12:46:14 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:41.838 12:46:14 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 619275a0-e8f1-4bc1-9ce1-0f14b75aa9e9 -t 2000 00:18:42.099 [ 00:18:42.099 { 00:18:42.099 "name": "619275a0-e8f1-4bc1-9ce1-0f14b75aa9e9", 00:18:42.099 "aliases": [ 00:18:42.099 "lvs/lvol" 00:18:42.099 ], 00:18:42.099 "product_name": "Logical Volume", 00:18:42.099 "block_size": 4096, 00:18:42.099 "num_blocks": 38912, 00:18:42.099 "uuid": "619275a0-e8f1-4bc1-9ce1-0f14b75aa9e9", 00:18:42.099 "assigned_rate_limits": { 00:18:42.099 "rw_ios_per_sec": 0, 00:18:42.099 "rw_mbytes_per_sec": 0, 00:18:42.099 "r_mbytes_per_sec": 0, 00:18:42.099 "w_mbytes_per_sec": 0 00:18:42.099 }, 00:18:42.099 "claimed": false, 00:18:42.099 "zoned": false, 00:18:42.099 "supported_io_types": { 00:18:42.099 "read": true, 00:18:42.099 "write": true, 00:18:42.099 "unmap": true, 00:18:42.099 "write_zeroes": true, 00:18:42.099 "flush": false, 00:18:42.099 "reset": true, 00:18:42.099 "compare": false, 00:18:42.099 "compare_and_write": false, 00:18:42.099 "abort": false, 00:18:42.099 "nvme_admin": false, 00:18:42.099 "nvme_io": false 00:18:42.099 }, 00:18:42.099 "driver_specific": { 00:18:42.099 "lvol": { 00:18:42.099 "lvol_store_uuid": "52b9cc62-69dd-405a-b7fb-3a61154cfa26", 00:18:42.099 "base_bdev": "aio_bdev", 00:18:42.099 "thin_provision": false, 00:18:42.099 "snapshot": false, 00:18:42.099 "clone": false, 00:18:42.099 "esnap_clone": false 00:18:42.099 } 00:18:42.099 } 00:18:42.099 } 00:18:42.099 ] 00:18:42.099 12:46:14 -- common/autotest_common.sh@905 -- # return 0 00:18:42.099 12:46:14 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52b9cc62-69dd-405a-b7fb-3a61154cfa26 00:18:42.099 12:46:14 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:42.099 12:46:15 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:42.099 12:46:15 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52b9cc62-69dd-405a-b7fb-3a61154cfa26 00:18:42.099 12:46:15 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:42.359 12:46:15 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:42.360 12:46:15 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 619275a0-e8f1-4bc1-9ce1-0f14b75aa9e9 00:18:42.360 12:46:15 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 52b9cc62-69dd-405a-b7fb-3a61154cfa26 00:18:42.620 12:46:15 -- target/nvmf_lvs_grow.sh@93 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:42.880 12:46:15 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:42.880 00:18:42.880 real 0m15.175s 00:18:42.880 user 0m15.212s 00:18:42.880 sys 0m0.922s 00:18:42.880 12:46:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:42.880 12:46:15 -- common/autotest_common.sh@10 -- # set +x 00:18:42.880 ************************************ 00:18:42.880 END TEST lvs_grow_clean 00:18:42.880 ************************************ 00:18:42.880 12:46:15 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:42.880 12:46:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:42.880 12:46:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:42.880 12:46:15 -- common/autotest_common.sh@10 -- # set +x 00:18:42.880 ************************************ 00:18:42.880 START TEST lvs_grow_dirty 00:18:42.880 ************************************ 00:18:42.880 12:46:15 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:18:42.880 12:46:15 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:42.880 12:46:15 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:42.880 12:46:15 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:42.880 12:46:15 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:42.880 12:46:15 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:42.880 12:46:15 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:42.880 12:46:15 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:42.880 12:46:15 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:42.880 12:46:15 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:43.141 12:46:16 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:43.141 12:46:16 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:43.141 12:46:16 -- target/nvmf_lvs_grow.sh@28 -- # lvs=2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3 00:18:43.141 12:46:16 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3 00:18:43.141 12:46:16 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:43.401 12:46:16 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:43.402 12:46:16 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:43.402 12:46:16 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3 lvol 150 00:18:43.663 12:46:16 -- target/nvmf_lvs_grow.sh@33 -- # lvol=ad757f72-f7d2-4e15-b753-bb38a533d7cb 00:18:43.663 12:46:16 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:43.663 12:46:16 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:43.663 [2024-11-20 
12:46:16.677219] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:43.663 [2024-11-20 12:46:16.677269] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:43.663 true 00:18:43.663 12:46:16 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:43.663 12:46:16 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3 00:18:43.925 12:46:16 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:43.925 12:46:16 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:43.925 12:46:16 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ad757f72-f7d2-4e15-b753-bb38a533d7cb 00:18:44.187 12:46:17 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:44.448 12:46:17 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:44.448 12:46:17 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:44.448 12:46:17 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=518375 00:18:44.448 12:46:17 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:44.448 12:46:17 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 518375 /var/tmp/bdevperf.sock 00:18:44.448 12:46:17 -- common/autotest_common.sh@829 -- # '[' -z 518375 ']' 00:18:44.448 12:46:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.448 12:46:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.448 12:46:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.448 12:46:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.448 12:46:17 -- common/autotest_common.sh@10 -- # set +x 00:18:44.448 [2024-11-20 12:46:17.472095] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:44.448 [2024-11-20 12:46:17.472143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518375 ] 00:18:44.448 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.448 [2024-11-20 12:46:17.551217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.709 [2024-11-20 12:46:17.612967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.281 12:46:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.281 12:46:18 -- common/autotest_common.sh@862 -- # return 0 00:18:45.281 12:46:18 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:45.542 Nvme0n1 00:18:45.542 12:46:18 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:45.802 [ 00:18:45.802 { 00:18:45.802 "name": "Nvme0n1", 00:18:45.802 "aliases": [ 00:18:45.802 "ad757f72-f7d2-4e15-b753-bb38a533d7cb" 00:18:45.802 ], 00:18:45.802 "product_name": "NVMe disk", 00:18:45.802 "block_size": 4096, 00:18:45.802 "num_blocks": 38912, 00:18:45.802 "uuid": "ad757f72-f7d2-4e15-b753-bb38a533d7cb", 00:18:45.802 "assigned_rate_limits": { 00:18:45.802 "rw_ios_per_sec": 0, 00:18:45.802 "rw_mbytes_per_sec": 0, 00:18:45.802 "r_mbytes_per_sec": 0, 00:18:45.802 "w_mbytes_per_sec": 0 00:18:45.802 }, 00:18:45.802 "claimed": false, 00:18:45.802 "zoned": false, 00:18:45.802 "supported_io_types": { 00:18:45.802 "read": true, 00:18:45.802 "write": true, 00:18:45.802 "unmap": true, 00:18:45.802 "write_zeroes": true, 00:18:45.802 "flush": true, 00:18:45.802 "reset": true, 00:18:45.802 "compare": true, 00:18:45.802 "compare_and_write": true, 00:18:45.802 "abort": true, 00:18:45.802 "nvme_admin": true, 00:18:45.802 "nvme_io": true 00:18:45.802 }, 00:18:45.802 "memory_domains": [ 00:18:45.802 { 00:18:45.802 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:45.802 "dma_device_type": 0 00:18:45.802 } 00:18:45.802 ], 00:18:45.802 "driver_specific": { 00:18:45.802 "nvme": [ 00:18:45.802 { 00:18:45.802 "trid": { 00:18:45.802 "trtype": "RDMA", 00:18:45.802 "adrfam": "IPv4", 00:18:45.802 "traddr": "192.168.100.8", 00:18:45.802 "trsvcid": "4420", 00:18:45.802 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:45.802 }, 00:18:45.802 "ctrlr_data": { 00:18:45.802 "cntlid": 1, 00:18:45.802 "vendor_id": "0x8086", 00:18:45.802 "model_number": "SPDK bdev Controller", 00:18:45.802 "serial_number": "SPDK0", 00:18:45.802 "firmware_revision": "24.01.1", 00:18:45.802 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:45.802 "oacs": { 00:18:45.803 "security": 0, 00:18:45.803 "format": 0, 00:18:45.803 "firmware": 0, 00:18:45.803 "ns_manage": 0 00:18:45.803 }, 00:18:45.803 "multi_ctrlr": true, 00:18:45.803 "ana_reporting": false 00:18:45.803 }, 00:18:45.803 "vs": { 00:18:45.803 "nvme_version": "1.3" 00:18:45.803 }, 00:18:45.803 "ns_data": { 00:18:45.803 "id": 1, 00:18:45.803 "can_share": true 00:18:45.803 } 00:18:45.803 } 00:18:45.803 ], 00:18:45.803 "mp_policy": "active_passive" 00:18:45.803 } 00:18:45.803 } 00:18:45.803 ] 00:18:45.803 12:46:18 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=518682 00:18:45.803 12:46:18 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:45.803 12:46:18 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:45.803 Running I/O for 10 seconds... 00:18:46.744 Latency(us) 00:18:46.744 [2024-11-20T11:46:19.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.744 [2024-11-20T11:46:19.852Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:46.744 Nvme0n1 : 1.00 27104.00 105.88 0.00 0.00 0.00 0.00 0.00 00:18:46.744 [2024-11-20T11:46:19.852Z] =================================================================================================================== 00:18:46.744 [2024-11-20T11:46:19.852Z] Total : 27104.00 105.88 0.00 0.00 0.00 0.00 0.00 00:18:46.744 00:18:47.687 12:46:20 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3 00:18:47.687 [2024-11-20T11:46:20.795Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:47.687 Nvme0n1 : 2.00 27262.50 106.49 0.00 0.00 0.00 0.00 0.00 00:18:47.687 [2024-11-20T11:46:20.795Z] =================================================================================================================== 00:18:47.687 [2024-11-20T11:46:20.795Z] Total : 27262.50 106.49 0.00 0.00 0.00 0.00 0.00 00:18:47.687 00:18:47.948 true 00:18:47.948 12:46:20 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3 00:18:47.948 12:46:20 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:47.948 12:46:21 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:47.948 12:46:21 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:47.948 12:46:21 -- target/nvmf_lvs_grow.sh@65 -- # wait 518682 00:18:48.890 [2024-11-20T11:46:21.998Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:48.890 Nvme0n1 : 3.00 27369.67 106.91 0.00 0.00 0.00 0.00 0.00 00:18:48.890 [2024-11-20T11:46:21.998Z] =================================================================================================================== 00:18:48.890 [2024-11-20T11:46:21.998Z] Total : 27369.67 106.91 0.00 0.00 0.00 0.00 0.00 00:18:48.890 00:18:49.832 [2024-11-20T11:46:22.940Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:49.832 Nvme0n1 : 4.00 27441.00 107.19 0.00 0.00 0.00 0.00 0.00 00:18:49.832 [2024-11-20T11:46:22.940Z] =================================================================================================================== 00:18:49.832 [2024-11-20T11:46:22.940Z] Total : 27441.00 107.19 0.00 0.00 0.00 0.00 0.00 00:18:49.832 00:18:50.774 [2024-11-20T11:46:23.882Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:50.774 Nvme0n1 : 5.00 27494.60 107.40 0.00 0.00 0.00 0.00 0.00 00:18:50.774 [2024-11-20T11:46:23.882Z] =================================================================================================================== 00:18:50.774 [2024-11-20T11:46:23.882Z] Total : 27494.60 107.40 0.00 0.00 0.00 0.00 0.00 00:18:50.774 00:18:51.718 [2024-11-20T11:46:24.826Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:51.718 Nvme0n1 : 6.00 27535.50 107.56 0.00 0.00 0.00 0.00 0.00 00:18:51.718 [2024-11-20T11:46:24.826Z] 
=================================================================================================================== 00:18:51.718 [2024-11-20T11:46:24.826Z] Total : 27535.50 107.56 0.00 0.00 0.00 0.00 0.00 00:18:51.718 00:18:53.102 [2024-11-20T11:46:26.210Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:53.102 Nvme0n1 : 7.00 27561.14 107.66 0.00 0.00 0.00 0.00 0.00 00:18:53.102 [2024-11-20T11:46:26.210Z] =================================================================================================================== 00:18:53.102 [2024-11-20T11:46:26.210Z] Total : 27561.14 107.66 0.00 0.00 0.00 0.00 0.00 00:18:53.102 00:18:54.045 [2024-11-20T11:46:27.153Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:54.045 Nvme0n1 : 8.00 27587.75 107.76 0.00 0.00 0.00 0.00 0.00 00:18:54.045 [2024-11-20T11:46:27.153Z] =================================================================================================================== 00:18:54.045 [2024-11-20T11:46:27.153Z] Total : 27587.75 107.76 0.00 0.00 0.00 0.00 0.00 00:18:54.045 00:18:54.988 [2024-11-20T11:46:28.096Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:54.988 Nvme0n1 : 9.00 27604.89 107.83 0.00 0.00 0.00 0.00 0.00 00:18:54.988 [2024-11-20T11:46:28.096Z] =================================================================================================================== 00:18:54.988 [2024-11-20T11:46:28.096Z] Total : 27604.89 107.83 0.00 0.00 0.00 0.00 0.00 00:18:54.988 00:18:55.932 [2024-11-20T11:46:29.040Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:55.932 Nvme0n1 : 10.00 27618.90 107.89 0.00 0.00 0.00 0.00 0.00 00:18:55.932 [2024-11-20T11:46:29.040Z] =================================================================================================================== 00:18:55.932 [2024-11-20T11:46:29.040Z] Total : 27618.90 107.89 0.00 0.00 0.00 0.00 0.00 00:18:55.932 00:18:55.932 00:18:55.932 Latency(us) 00:18:55.932 [2024-11-20T11:46:29.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.932 [2024-11-20T11:46:29.040Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:55.932 Nvme0n1 : 10.00 27618.03 107.88 0.00 0.00 4631.65 3140.27 12233.39 00:18:55.932 [2024-11-20T11:46:29.040Z] =================================================================================================================== 00:18:55.932 [2024-11-20T11:46:29.040Z] Total : 27618.03 107.88 0.00 0.00 4631.65 3140.27 12233.39 00:18:55.932 0 00:18:55.932 12:46:28 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 518375 00:18:55.932 12:46:28 -- common/autotest_common.sh@936 -- # '[' -z 518375 ']' 00:18:55.932 12:46:28 -- common/autotest_common.sh@940 -- # kill -0 518375 00:18:55.932 12:46:28 -- common/autotest_common.sh@941 -- # uname 00:18:55.932 12:46:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:55.932 12:46:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 518375 00:18:55.932 12:46:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:55.932 12:46:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:55.932 12:46:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 518375' 00:18:55.932 killing process with pid 518375 00:18:55.932 12:46:28 -- common/autotest_common.sh@955 -- # kill 518375 00:18:55.932 Received shutdown signal, test time was about 10.000000 seconds 00:18:55.932 
00:18:55.932 Latency(us) 00:18:55.932 [2024-11-20T11:46:29.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.932 [2024-11-20T11:46:29.040Z] =================================================================================================================== 00:18:55.932 [2024-11-20T11:46:29.040Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:55.932 12:46:28 -- common/autotest_common.sh@960 -- # wait 518375 00:18:55.932 12:46:29 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:56.193 12:46:29 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3 00:18:56.193 12:46:29 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:56.454 12:46:29 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:56.454 12:46:29 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:56.454 12:46:29 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 514770 00:18:56.454 12:46:29 -- target/nvmf_lvs_grow.sh@74 -- # wait 514770 00:18:56.454 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 514770 Killed "${NVMF_APP[@]}" "$@" 00:18:56.454 12:46:29 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:56.454 12:46:29 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:56.454 12:46:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:56.454 12:46:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:56.454 12:46:29 -- common/autotest_common.sh@10 -- # set +x 00:18:56.454 12:46:29 -- nvmf/common.sh@469 -- # nvmfpid=520750 00:18:56.454 12:46:29 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:56.454 12:46:29 -- nvmf/common.sh@470 -- # waitforlisten 520750 00:18:56.454 12:46:29 -- common/autotest_common.sh@829 -- # '[' -z 520750 ']' 00:18:56.454 12:46:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.454 12:46:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:56.454 12:46:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.454 12:46:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:56.454 12:46:29 -- common/autotest_common.sh@10 -- # set +x 00:18:56.454 [2024-11-20 12:46:29.450875] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:56.454 [2024-11-20 12:46:29.450928] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.454 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.454 [2024-11-20 12:46:29.512472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.715 [2024-11-20 12:46:29.575935] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:56.715 [2024-11-20 12:46:29.576056] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.715 [2024-11-20 12:46:29.576065] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:56.716 [2024-11-20 12:46:29.576073] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.716 [2024-11-20 12:46:29.576097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.289 12:46:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:57.289 12:46:30 -- common/autotest_common.sh@862 -- # return 0 00:18:57.289 12:46:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:57.289 12:46:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:57.289 12:46:30 -- common/autotest_common.sh@10 -- # set +x 00:18:57.289 12:46:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.289 12:46:30 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:57.551 [2024-11-20 12:46:30.408964] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:57.551 [2024-11-20 12:46:30.409058] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:57.551 [2024-11-20 12:46:30.409090] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:57.551 12:46:30 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:57.551 12:46:30 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev ad757f72-f7d2-4e15-b753-bb38a533d7cb 00:18:57.551 12:46:30 -- common/autotest_common.sh@897 -- # local bdev_name=ad757f72-f7d2-4e15-b753-bb38a533d7cb 00:18:57.551 12:46:30 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:57.551 12:46:30 -- common/autotest_common.sh@899 -- # local i 00:18:57.551 12:46:30 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:57.551 12:46:30 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:57.551 12:46:30 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:57.551 12:46:30 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ad757f72-f7d2-4e15-b753-bb38a533d7cb -t 2000 00:18:57.812 [ 00:18:57.812 { 00:18:57.812 "name": "ad757f72-f7d2-4e15-b753-bb38a533d7cb", 00:18:57.812 "aliases": [ 00:18:57.812 "lvs/lvol" 00:18:57.812 ], 00:18:57.812 "product_name": "Logical Volume", 00:18:57.812 "block_size": 4096, 00:18:57.812 "num_blocks": 38912, 00:18:57.812 "uuid": "ad757f72-f7d2-4e15-b753-bb38a533d7cb", 00:18:57.812 "assigned_rate_limits": { 00:18:57.812 "rw_ios_per_sec": 0, 00:18:57.812 "rw_mbytes_per_sec": 0, 00:18:57.812 "r_mbytes_per_sec": 0, 00:18:57.812 "w_mbytes_per_sec": 0 00:18:57.812 }, 00:18:57.812 "claimed": false, 00:18:57.812 "zoned": false, 00:18:57.812 "supported_io_types": { 00:18:57.812 "read": true, 00:18:57.812 "write": true, 00:18:57.812 "unmap": true, 00:18:57.812 "write_zeroes": true, 00:18:57.812 "flush": false, 00:18:57.812 "reset": true, 00:18:57.812 "compare": false, 00:18:57.812 "compare_and_write": false, 00:18:57.812 "abort": false, 00:18:57.812 "nvme_admin": false, 00:18:57.812 "nvme_io": false 00:18:57.812 }, 00:18:57.812 "driver_specific": { 00:18:57.812 "lvol": { 00:18:57.812 "lvol_store_uuid": "2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3", 00:18:57.812 "base_bdev": "aio_bdev", 00:18:57.812 "thin_provision": false, 00:18:57.812 "snapshot": false, 00:18:57.812 "clone": false, 00:18:57.812 "esnap_clone": false 00:18:57.812 } 00:18:57.812 } 00:18:57.812 } 00:18:57.812 ] 
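The recovery shown just above is the core of the dirty-path check: the lvstore was grown and written through bdevperf, the nvmf_tgt process was killed with -9, and the blobstore is then replayed from the same backing file once the AIO bdev is re-created. A condensed sketch of that RPC sequence, reconstructed from the calls captured in this log (the rpc/aio/lvs shell variables are shorthand introduced here for readability only and did not appear in the run):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    aio=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
    lvs=2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3

    $rpc bdev_aio_create "$aio" aio_bdev 4096      # re-attach the backing file; bs_recover replays the blobstore metadata
    $rpc bdev_wait_for_examine                     # let vbdev_lvol claim the recovered lvstore
    $rpc bdev_get_bdevs -b ad757f72-f7d2-4e15-b753-bb38a533d7cb -t 2000       # the lvol resurfaces under the lvs/lvol alias
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expected 61: the 150M lvol consumes 38 of the 4M clusters
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expected 99 after the earlier grow to 400M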
00:18:57.812 12:46:30 -- common/autotest_common.sh@905 -- # return 0 00:18:57.812 12:46:30 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3 00:18:57.812 12:46:30 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:57.812 12:46:30 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:57.812 12:46:30 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3 00:18:57.812 12:46:30 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:58.073 12:46:31 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:58.073 12:46:31 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:58.334 [2024-11-20 12:46:31.220993] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:58.334 12:46:31 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3 00:18:58.334 12:46:31 -- common/autotest_common.sh@650 -- # local es=0 00:18:58.334 12:46:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3 00:18:58.334 12:46:31 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:58.334 12:46:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:58.334 12:46:31 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:58.334 12:46:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:58.335 12:46:31 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:58.335 12:46:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:58.335 12:46:31 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:58.335 12:46:31 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:58.335 12:46:31 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3 00:18:58.335 request: 00:18:58.335 { 00:18:58.335 "uuid": "2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3", 00:18:58.335 "method": "bdev_lvol_get_lvstores", 00:18:58.335 "req_id": 1 00:18:58.335 } 00:18:58.335 Got JSON-RPC error response 00:18:58.335 response: 00:18:58.335 { 00:18:58.335 "code": -19, 00:18:58.335 "message": "No such device" 00:18:58.335 } 00:18:58.335 12:46:31 -- common/autotest_common.sh@653 -- # es=1 00:18:58.335 12:46:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:58.335 12:46:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:58.335 12:46:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:58.335 12:46:31 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:58.595 aio_bdev 00:18:58.595 12:46:31 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 
ad757f72-f7d2-4e15-b753-bb38a533d7cb 00:18:58.595 12:46:31 -- common/autotest_common.sh@897 -- # local bdev_name=ad757f72-f7d2-4e15-b753-bb38a533d7cb 00:18:58.595 12:46:31 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:58.595 12:46:31 -- common/autotest_common.sh@899 -- # local i 00:18:58.595 12:46:31 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:58.595 12:46:31 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:58.595 12:46:31 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:58.857 12:46:31 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ad757f72-f7d2-4e15-b753-bb38a533d7cb -t 2000 00:18:58.857 [ 00:18:58.857 { 00:18:58.857 "name": "ad757f72-f7d2-4e15-b753-bb38a533d7cb", 00:18:58.857 "aliases": [ 00:18:58.857 "lvs/lvol" 00:18:58.857 ], 00:18:58.857 "product_name": "Logical Volume", 00:18:58.857 "block_size": 4096, 00:18:58.857 "num_blocks": 38912, 00:18:58.857 "uuid": "ad757f72-f7d2-4e15-b753-bb38a533d7cb", 00:18:58.857 "assigned_rate_limits": { 00:18:58.857 "rw_ios_per_sec": 0, 00:18:58.857 "rw_mbytes_per_sec": 0, 00:18:58.857 "r_mbytes_per_sec": 0, 00:18:58.857 "w_mbytes_per_sec": 0 00:18:58.857 }, 00:18:58.857 "claimed": false, 00:18:58.857 "zoned": false, 00:18:58.857 "supported_io_types": { 00:18:58.857 "read": true, 00:18:58.857 "write": true, 00:18:58.857 "unmap": true, 00:18:58.857 "write_zeroes": true, 00:18:58.857 "flush": false, 00:18:58.857 "reset": true, 00:18:58.857 "compare": false, 00:18:58.857 "compare_and_write": false, 00:18:58.857 "abort": false, 00:18:58.857 "nvme_admin": false, 00:18:58.857 "nvme_io": false 00:18:58.857 }, 00:18:58.857 "driver_specific": { 00:18:58.857 "lvol": { 00:18:58.857 "lvol_store_uuid": "2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3", 00:18:58.857 "base_bdev": "aio_bdev", 00:18:58.857 "thin_provision": false, 00:18:58.857 "snapshot": false, 00:18:58.857 "clone": false, 00:18:58.857 "esnap_clone": false 00:18:58.857 } 00:18:58.857 } 00:18:58.857 } 00:18:58.857 ] 00:18:58.857 12:46:31 -- common/autotest_common.sh@905 -- # return 0 00:18:58.857 12:46:31 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3 00:18:58.857 12:46:31 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:59.118 12:46:32 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:59.118 12:46:32 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3 00:18:59.118 12:46:32 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:59.118 12:46:32 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:59.118 12:46:32 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ad757f72-f7d2-4e15-b753-bb38a533d7cb 00:18:59.379 12:46:32 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2e33f5e2-c8b3-4ab5-9db6-88a511c0e3e3 00:18:59.638 12:46:32 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:59.639 12:46:32 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:59.639 00:18:59.639 real 
0m16.843s 00:18:59.639 user 0m44.606s 00:18:59.639 sys 0m2.299s 00:18:59.639 12:46:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:59.639 12:46:32 -- common/autotest_common.sh@10 -- # set +x 00:18:59.639 ************************************ 00:18:59.639 END TEST lvs_grow_dirty 00:18:59.639 ************************************ 00:18:59.899 12:46:32 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:59.899 12:46:32 -- common/autotest_common.sh@806 -- # type=--id 00:18:59.899 12:46:32 -- common/autotest_common.sh@807 -- # id=0 00:18:59.899 12:46:32 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:59.899 12:46:32 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:59.899 12:46:32 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:59.899 12:46:32 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:59.899 12:46:32 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:59.899 12:46:32 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:59.899 nvmf_trace.0 00:18:59.899 12:46:32 -- common/autotest_common.sh@821 -- # return 0 00:18:59.899 12:46:32 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:59.899 12:46:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:59.899 12:46:32 -- nvmf/common.sh@116 -- # sync 00:18:59.899 12:46:32 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:59.899 12:46:32 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:59.899 12:46:32 -- nvmf/common.sh@119 -- # set +e 00:18:59.899 12:46:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:59.899 12:46:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:59.899 rmmod nvme_rdma 00:18:59.899 rmmod nvme_fabrics 00:18:59.899 12:46:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:59.899 12:46:32 -- nvmf/common.sh@123 -- # set -e 00:18:59.899 12:46:32 -- nvmf/common.sh@124 -- # return 0 00:18:59.899 12:46:32 -- nvmf/common.sh@477 -- # '[' -n 520750 ']' 00:18:59.899 12:46:32 -- nvmf/common.sh@478 -- # killprocess 520750 00:18:59.899 12:46:32 -- common/autotest_common.sh@936 -- # '[' -z 520750 ']' 00:18:59.899 12:46:32 -- common/autotest_common.sh@940 -- # kill -0 520750 00:18:59.899 12:46:32 -- common/autotest_common.sh@941 -- # uname 00:18:59.899 12:46:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:59.899 12:46:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 520750 00:18:59.899 12:46:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:59.899 12:46:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:59.899 12:46:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 520750' 00:18:59.899 killing process with pid 520750 00:18:59.899 12:46:32 -- common/autotest_common.sh@955 -- # kill 520750 00:18:59.899 12:46:32 -- common/autotest_common.sh@960 -- # wait 520750 00:19:00.160 12:46:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:00.160 12:46:33 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:00.160 00:19:00.160 real 0m40.852s 00:19:00.160 user 1m5.853s 00:19:00.160 sys 0m8.967s 00:19:00.160 12:46:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:00.160 12:46:33 -- common/autotest_common.sh@10 -- # set +x 00:19:00.160 ************************************ 00:19:00.160 END TEST nvmf_lvs_grow 00:19:00.160 ************************************ 00:19:00.160 12:46:33 -- 
nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:19:00.160 12:46:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:00.160 12:46:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:00.160 12:46:33 -- common/autotest_common.sh@10 -- # set +x 00:19:00.160 ************************************ 00:19:00.160 START TEST nvmf_bdev_io_wait 00:19:00.160 ************************************ 00:19:00.160 12:46:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:19:00.160 * Looking for test storage... 00:19:00.160 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:00.160 12:46:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:00.160 12:46:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:00.160 12:46:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:00.160 12:46:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:00.160 12:46:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:00.160 12:46:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:00.160 12:46:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:00.160 12:46:33 -- scripts/common.sh@335 -- # IFS=.-: 00:19:00.160 12:46:33 -- scripts/common.sh@335 -- # read -ra ver1 00:19:00.160 12:46:33 -- scripts/common.sh@336 -- # IFS=.-: 00:19:00.160 12:46:33 -- scripts/common.sh@336 -- # read -ra ver2 00:19:00.160 12:46:33 -- scripts/common.sh@337 -- # local 'op=<' 00:19:00.160 12:46:33 -- scripts/common.sh@339 -- # ver1_l=2 00:19:00.160 12:46:33 -- scripts/common.sh@340 -- # ver2_l=1 00:19:00.160 12:46:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:00.160 12:46:33 -- scripts/common.sh@343 -- # case "$op" in 00:19:00.160 12:46:33 -- scripts/common.sh@344 -- # : 1 00:19:00.160 12:46:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:00.160 12:46:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:00.160 12:46:33 -- scripts/common.sh@364 -- # decimal 1 00:19:00.160 12:46:33 -- scripts/common.sh@352 -- # local d=1 00:19:00.160 12:46:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:00.160 12:46:33 -- scripts/common.sh@354 -- # echo 1 00:19:00.160 12:46:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:00.160 12:46:33 -- scripts/common.sh@365 -- # decimal 2 00:19:00.160 12:46:33 -- scripts/common.sh@352 -- # local d=2 00:19:00.160 12:46:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:00.160 12:46:33 -- scripts/common.sh@354 -- # echo 2 00:19:00.160 12:46:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:00.160 12:46:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:00.160 12:46:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:00.160 12:46:33 -- scripts/common.sh@367 -- # return 0 00:19:00.160 12:46:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:00.160 12:46:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:00.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.160 --rc genhtml_branch_coverage=1 00:19:00.160 --rc genhtml_function_coverage=1 00:19:00.160 --rc genhtml_legend=1 00:19:00.160 --rc geninfo_all_blocks=1 00:19:00.160 --rc geninfo_unexecuted_blocks=1 00:19:00.160 00:19:00.160 ' 00:19:00.160 12:46:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:00.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.160 --rc genhtml_branch_coverage=1 00:19:00.160 --rc genhtml_function_coverage=1 00:19:00.160 --rc genhtml_legend=1 00:19:00.160 --rc geninfo_all_blocks=1 00:19:00.160 --rc geninfo_unexecuted_blocks=1 00:19:00.160 00:19:00.160 ' 00:19:00.160 12:46:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:00.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.160 --rc genhtml_branch_coverage=1 00:19:00.160 --rc genhtml_function_coverage=1 00:19:00.160 --rc genhtml_legend=1 00:19:00.160 --rc geninfo_all_blocks=1 00:19:00.160 --rc geninfo_unexecuted_blocks=1 00:19:00.160 00:19:00.160 ' 00:19:00.160 12:46:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:00.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.160 --rc genhtml_branch_coverage=1 00:19:00.160 --rc genhtml_function_coverage=1 00:19:00.160 --rc genhtml_legend=1 00:19:00.160 --rc geninfo_all_blocks=1 00:19:00.160 --rc geninfo_unexecuted_blocks=1 00:19:00.160 00:19:00.160 ' 00:19:00.160 12:46:33 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:00.160 12:46:33 -- nvmf/common.sh@7 -- # uname -s 00:19:00.161 12:46:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.161 12:46:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.161 12:46:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.161 12:46:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.161 12:46:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.161 12:46:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.161 12:46:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.161 12:46:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.161 12:46:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.161 12:46:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.423 12:46:33 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:00.423 12:46:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:00.423 12:46:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.423 12:46:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.423 12:46:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.423 12:46:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:00.423 12:46:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.423 12:46:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.423 12:46:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.423 12:46:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.423 12:46:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.423 12:46:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.423 12:46:33 -- paths/export.sh@5 -- # export PATH 00:19:00.423 12:46:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.423 12:46:33 -- nvmf/common.sh@46 -- # : 0 00:19:00.423 12:46:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:00.423 12:46:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:00.423 12:46:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:00.423 12:46:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.423 12:46:33 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.423 12:46:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:00.423 12:46:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:00.423 12:46:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:00.423 12:46:33 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:00.423 12:46:33 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:00.423 12:46:33 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:19:00.423 12:46:33 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:00.423 12:46:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.423 12:46:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:00.423 12:46:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:00.423 12:46:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:00.423 12:46:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.423 12:46:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.423 12:46:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.423 12:46:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:00.423 12:46:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:00.423 12:46:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:00.423 12:46:33 -- common/autotest_common.sh@10 -- # set +x 00:19:07.014 12:46:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:07.014 12:46:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:07.014 12:46:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:07.014 12:46:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:07.014 12:46:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:07.014 12:46:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:07.014 12:46:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:07.014 12:46:40 -- nvmf/common.sh@294 -- # net_devs=() 00:19:07.014 12:46:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:07.014 12:46:40 -- nvmf/common.sh@295 -- # e810=() 00:19:07.014 12:46:40 -- nvmf/common.sh@295 -- # local -ga e810 00:19:07.014 12:46:40 -- nvmf/common.sh@296 -- # x722=() 00:19:07.014 12:46:40 -- nvmf/common.sh@296 -- # local -ga x722 00:19:07.014 12:46:40 -- nvmf/common.sh@297 -- # mlx=() 00:19:07.014 12:46:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:07.014 12:46:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.014 12:46:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.014 12:46:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.014 12:46:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.014 12:46:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.014 12:46:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.014 12:46:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.014 12:46:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.014 12:46:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.014 12:46:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.014 12:46:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.014 12:46:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:07.014 12:46:40 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:07.014 12:46:40 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:19:07.014 12:46:40 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:07.014 12:46:40 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:07.014 12:46:40 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:07.014 12:46:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:07.014 12:46:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:07.014 12:46:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:19:07.014 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:19:07.014 12:46:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:07.014 12:46:40 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:07.014 12:46:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:07.014 12:46:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:07.014 12:46:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:07.014 12:46:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:07.014 12:46:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:07.014 12:46:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:19:07.014 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:19:07.014 12:46:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:07.014 12:46:40 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:07.014 12:46:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:07.014 12:46:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:07.014 12:46:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:07.014 12:46:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:07.014 12:46:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:07.014 12:46:40 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:07.014 12:46:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:07.015 12:46:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.015 12:46:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:07.015 12:46:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.015 12:46:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:19:07.015 Found net devices under 0000:98:00.0: mlx_0_0 00:19:07.015 12:46:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.015 12:46:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:07.015 12:46:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.015 12:46:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:07.015 12:46:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.015 12:46:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:19:07.015 Found net devices under 0000:98:00.1: mlx_0_1 00:19:07.015 12:46:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.015 12:46:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:07.015 12:46:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:07.015 12:46:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:07.015 12:46:40 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:07.015 12:46:40 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:07.015 12:46:40 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:07.015 12:46:40 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:07.015 12:46:40 -- nvmf/common.sh@57 -- # uname 00:19:07.015 12:46:40 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:07.015 12:46:40 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:19:07.015 12:46:40 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:07.015 12:46:40 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:07.015 12:46:40 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:07.015 12:46:40 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:07.015 12:46:40 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:07.015 12:46:40 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:07.015 12:46:40 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:07.015 12:46:40 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:07.015 12:46:40 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:07.015 12:46:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:07.015 12:46:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:07.015 12:46:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:07.015 12:46:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:07.015 12:46:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:07.015 12:46:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:07.015 12:46:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.015 12:46:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:07.015 12:46:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:07.015 12:46:40 -- nvmf/common.sh@104 -- # continue 2 00:19:07.015 12:46:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:07.015 12:46:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.015 12:46:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:07.015 12:46:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.015 12:46:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:07.015 12:46:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:07.015 12:46:40 -- nvmf/common.sh@104 -- # continue 2 00:19:07.015 12:46:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:07.015 12:46:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:07.015 12:46:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:07.015 12:46:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:07.015 12:46:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:07.015 12:46:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:07.015 12:46:40 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:07.015 12:46:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:07.015 12:46:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:07.276 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:07.276 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:19:07.276 altname enp152s0f0np0 00:19:07.276 altname ens817f0np0 00:19:07.276 inet 192.168.100.8/24 scope global mlx_0_0 00:19:07.276 valid_lft forever preferred_lft forever 00:19:07.276 12:46:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:07.276 12:46:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:07.276 12:46:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:07.276 12:46:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:07.276 12:46:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:07.276 12:46:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:07.276 12:46:40 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:07.276 12:46:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:07.276 12:46:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:07.276 5: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:07.276 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:19:07.276 altname enp152s0f1np1 00:19:07.276 altname ens817f1np1 00:19:07.276 inet 192.168.100.9/24 scope global mlx_0_1 00:19:07.276 valid_lft forever preferred_lft forever 00:19:07.276 12:46:40 -- nvmf/common.sh@410 -- # return 0 00:19:07.276 12:46:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:07.276 12:46:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:07.276 12:46:40 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:07.276 12:46:40 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:07.276 12:46:40 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:07.276 12:46:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:07.276 12:46:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:07.276 12:46:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:07.276 12:46:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:07.276 12:46:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:07.276 12:46:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:07.276 12:46:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.276 12:46:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:07.276 12:46:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:07.276 12:46:40 -- nvmf/common.sh@104 -- # continue 2 00:19:07.276 12:46:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:07.276 12:46:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.276 12:46:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:07.276 12:46:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.276 12:46:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:07.276 12:46:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:07.276 12:46:40 -- nvmf/common.sh@104 -- # continue 2 00:19:07.276 12:46:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:07.276 12:46:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:07.276 12:46:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:07.276 12:46:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:07.276 12:46:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:07.276 12:46:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:07.276 12:46:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:07.276 12:46:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:07.276 12:46:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:07.276 12:46:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:07.276 12:46:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:07.276 12:46:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:07.276 12:46:40 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:07.276 192.168.100.9' 00:19:07.276 12:46:40 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:07.276 192.168.100.9' 00:19:07.276 12:46:40 -- nvmf/common.sh@445 -- # head -n 1 00:19:07.276 12:46:40 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:07.276 12:46:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:07.276 192.168.100.9' 00:19:07.276 12:46:40 -- nvmf/common.sh@446 -- # tail -n +2 00:19:07.276 12:46:40 -- nvmf/common.sh@446 -- # head -n 1 00:19:07.276 12:46:40 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:07.276 12:46:40 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:07.276 12:46:40 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:07.276 12:46:40 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:07.276 12:46:40 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:07.276 12:46:40 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:07.276 12:46:40 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:07.276 12:46:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:07.276 12:46:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:07.276 12:46:40 -- common/autotest_common.sh@10 -- # set +x 00:19:07.276 12:46:40 -- nvmf/common.sh@469 -- # nvmfpid=525200 00:19:07.276 12:46:40 -- nvmf/common.sh@470 -- # waitforlisten 525200 00:19:07.276 12:46:40 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:07.276 12:46:40 -- common/autotest_common.sh@829 -- # '[' -z 525200 ']' 00:19:07.276 12:46:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.276 12:46:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.276 12:46:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.276 12:46:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.276 12:46:40 -- common/autotest_common.sh@10 -- # set +x 00:19:07.276 [2024-11-20 12:46:40.301129] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:07.276 [2024-11-20 12:46:40.301182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.276 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.276 [2024-11-20 12:46:40.365189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:07.539 [2024-11-20 12:46:40.429141] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:07.539 [2024-11-20 12:46:40.429260] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.539 [2024-11-20 12:46:40.429269] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.539 [2024-11-20 12:46:40.429276] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
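Note for readers following the trace: the get_ip_address steps just above reduce to one pipeline over `ip -o -4 addr show`. A minimal standalone sketch of that lookup, reusing the interface names reported by the device scan on this rig:

# Print the first IPv4 address assigned to an interface, without the /prefix length.
# Mirrors the awk/cut pipeline shown in the trace above.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0    # -> 192.168.100.8 on this rig
get_ip_address mlx_0_1    # -> 192.168.100.9 on this rig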
00:19:07.539 [2024-11-20 12:46:40.429436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.539 [2024-11-20 12:46:40.429667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.539 [2024-11-20 12:46:40.429821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.539 [2024-11-20 12:46:40.429823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:08.112 12:46:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.112 12:46:41 -- common/autotest_common.sh@862 -- # return 0 00:19:08.112 12:46:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:08.112 12:46:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:08.112 12:46:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.112 12:46:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.112 12:46:41 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:08.112 12:46:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.112 12:46:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.112 12:46:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.112 12:46:41 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:08.112 12:46:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.112 12:46:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.112 12:46:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.112 12:46:41 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:08.112 12:46:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.112 12:46:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.112 [2024-11-20 12:46:41.208175] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12757f0/0x1279ce0) succeed. 00:19:08.372 [2024-11-20 12:46:41.221927] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1276de0/0x12bb380) succeed. 
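For reference, rpc_cmd in this suite is effectively a thin wrapper around scripts/rpc.py talking to the socket named in the waitforlisten line. Issued by hand, the three calls traced above would look roughly like the sketch below (not the suite's exact helper; the rpc.py path is taken from the workspace layout in the trace):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # defaults to /var/tmp/spdk.sock

# Shrink the bdev_io pool so allocations run dry and the IO_WAIT retry path is exercised.
$RPC bdev_set_options -p 5 -c 1
# Finish the initialization that was deferred by starting nvmf_tgt with --wait-for-rpc.
$RPC framework_start_init
# Create the RDMA transport with the buffer settings from the trace.
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192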
00:19:08.372 12:46:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.372 12:46:41 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:08.372 12:46:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.372 12:46:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.372 Malloc0 00:19:08.372 12:46:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.372 12:46:41 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:08.372 12:46:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.372 12:46:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.372 12:46:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.372 12:46:41 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:08.372 12:46:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.372 12:46:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.372 12:46:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.372 12:46:41 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:08.372 12:46:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.372 12:46:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.372 [2024-11-20 12:46:41.411846] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:08.372 12:46:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.372 12:46:41 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=525485 00:19:08.372 12:46:41 -- target/bdev_io_wait.sh@30 -- # READ_PID=525488 00:19:08.372 12:46:41 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:08.372 12:46:41 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:08.372 12:46:41 -- nvmf/common.sh@520 -- # config=() 00:19:08.372 12:46:41 -- nvmf/common.sh@520 -- # local subsystem config 00:19:08.372 12:46:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:08.372 12:46:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:08.372 { 00:19:08.372 "params": { 00:19:08.372 "name": "Nvme$subsystem", 00:19:08.372 "trtype": "$TEST_TRANSPORT", 00:19:08.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.372 "adrfam": "ipv4", 00:19:08.372 "trsvcid": "$NVMF_PORT", 00:19:08.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.372 "hdgst": ${hdgst:-false}, 00:19:08.372 "ddgst": ${ddgst:-false} 00:19:08.372 }, 00:19:08.372 "method": "bdev_nvme_attach_controller" 00:19:08.372 } 00:19:08.372 EOF 00:19:08.372 )") 00:19:08.372 12:46:41 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=525490 00:19:08.372 12:46:41 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:08.372 12:46:41 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:08.372 12:46:41 -- nvmf/common.sh@520 -- # config=() 00:19:08.372 12:46:41 -- nvmf/common.sh@520 -- # local subsystem config 00:19:08.372 12:46:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:08.372 12:46:41 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=525494 00:19:08.372 12:46:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 
00:19:08.372 { 00:19:08.372 "params": { 00:19:08.372 "name": "Nvme$subsystem", 00:19:08.372 "trtype": "$TEST_TRANSPORT", 00:19:08.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.372 "adrfam": "ipv4", 00:19:08.372 "trsvcid": "$NVMF_PORT", 00:19:08.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.373 "hdgst": ${hdgst:-false}, 00:19:08.373 "ddgst": ${ddgst:-false} 00:19:08.373 }, 00:19:08.373 "method": "bdev_nvme_attach_controller" 00:19:08.373 } 00:19:08.373 EOF 00:19:08.373 )") 00:19:08.373 12:46:41 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:19:08.373 12:46:41 -- target/bdev_io_wait.sh@35 -- # sync 00:19:08.373 12:46:41 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:08.373 12:46:41 -- nvmf/common.sh@520 -- # config=() 00:19:08.373 12:46:41 -- nvmf/common.sh@542 -- # cat 00:19:08.373 12:46:41 -- nvmf/common.sh@520 -- # local subsystem config 00:19:08.373 12:46:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:08.373 12:46:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:08.373 { 00:19:08.373 "params": { 00:19:08.373 "name": "Nvme$subsystem", 00:19:08.373 "trtype": "$TEST_TRANSPORT", 00:19:08.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.373 "adrfam": "ipv4", 00:19:08.373 "trsvcid": "$NVMF_PORT", 00:19:08.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.373 "hdgst": ${hdgst:-false}, 00:19:08.373 "ddgst": ${ddgst:-false} 00:19:08.373 }, 00:19:08.373 "method": "bdev_nvme_attach_controller" 00:19:08.373 } 00:19:08.373 EOF 00:19:08.373 )") 00:19:08.373 12:46:41 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:08.373 12:46:41 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:08.373 12:46:41 -- nvmf/common.sh@520 -- # config=() 00:19:08.373 12:46:41 -- nvmf/common.sh@520 -- # local subsystem config 00:19:08.373 12:46:41 -- nvmf/common.sh@542 -- # cat 00:19:08.373 12:46:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:08.373 12:46:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:08.373 { 00:19:08.373 "params": { 00:19:08.373 "name": "Nvme$subsystem", 00:19:08.373 "trtype": "$TEST_TRANSPORT", 00:19:08.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.373 "adrfam": "ipv4", 00:19:08.373 "trsvcid": "$NVMF_PORT", 00:19:08.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.373 "hdgst": ${hdgst:-false}, 00:19:08.373 "ddgst": ${ddgst:-false} 00:19:08.373 }, 00:19:08.373 "method": "bdev_nvme_attach_controller" 00:19:08.373 } 00:19:08.373 EOF 00:19:08.373 )") 00:19:08.373 12:46:41 -- nvmf/common.sh@542 -- # cat 00:19:08.373 12:46:41 -- target/bdev_io_wait.sh@37 -- # wait 525485 00:19:08.373 12:46:41 -- nvmf/common.sh@542 -- # cat 00:19:08.373 12:46:41 -- nvmf/common.sh@544 -- # jq . 00:19:08.373 12:46:41 -- nvmf/common.sh@544 -- # jq . 00:19:08.373 12:46:41 -- nvmf/common.sh@544 -- # jq . 
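The provisioning sequence traced a little earlier (malloc bdev, subsystem, namespace, RDMA listener) follows the same pattern and can be reproduced by hand; a sketch with the same names, sizes, and addresses as the trace, again assuming rpc.py as the transport for the rpc_cmd calls:

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB backing bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose the bdev as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420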
00:19:08.373 12:46:41 -- nvmf/common.sh@545 -- # IFS=, 00:19:08.373 12:46:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:08.373 "params": { 00:19:08.373 "name": "Nvme1", 00:19:08.373 "trtype": "rdma", 00:19:08.373 "traddr": "192.168.100.8", 00:19:08.373 "adrfam": "ipv4", 00:19:08.373 "trsvcid": "4420", 00:19:08.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.373 "hdgst": false, 00:19:08.373 "ddgst": false 00:19:08.373 }, 00:19:08.373 "method": "bdev_nvme_attach_controller" 00:19:08.373 }' 00:19:08.373 12:46:41 -- nvmf/common.sh@544 -- # jq . 00:19:08.373 12:46:41 -- nvmf/common.sh@545 -- # IFS=, 00:19:08.373 12:46:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:08.373 "params": { 00:19:08.373 "name": "Nvme1", 00:19:08.373 "trtype": "rdma", 00:19:08.373 "traddr": "192.168.100.8", 00:19:08.373 "adrfam": "ipv4", 00:19:08.373 "trsvcid": "4420", 00:19:08.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.373 "hdgst": false, 00:19:08.373 "ddgst": false 00:19:08.373 }, 00:19:08.373 "method": "bdev_nvme_attach_controller" 00:19:08.373 }' 00:19:08.373 12:46:41 -- nvmf/common.sh@545 -- # IFS=, 00:19:08.373 12:46:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:08.373 "params": { 00:19:08.373 "name": "Nvme1", 00:19:08.373 "trtype": "rdma", 00:19:08.373 "traddr": "192.168.100.8", 00:19:08.373 "adrfam": "ipv4", 00:19:08.373 "trsvcid": "4420", 00:19:08.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.373 "hdgst": false, 00:19:08.373 "ddgst": false 00:19:08.373 }, 00:19:08.373 "method": "bdev_nvme_attach_controller" 00:19:08.373 }' 00:19:08.373 12:46:41 -- nvmf/common.sh@545 -- # IFS=, 00:19:08.373 12:46:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:08.373 "params": { 00:19:08.373 "name": "Nvme1", 00:19:08.373 "trtype": "rdma", 00:19:08.373 "traddr": "192.168.100.8", 00:19:08.373 "adrfam": "ipv4", 00:19:08.373 "trsvcid": "4420", 00:19:08.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.373 "hdgst": false, 00:19:08.373 "ddgst": false 00:19:08.373 }, 00:19:08.373 "method": "bdev_nvme_attach_controller" 00:19:08.373 }' 00:19:08.373 [2024-11-20 12:46:41.462507] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:08.373 [2024-11-20 12:46:41.462555] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:08.373 [2024-11-20 12:46:41.464754] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:08.373 [2024-11-20 12:46:41.464813] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:08.373 [2024-11-20 12:46:41.464831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:08.373 [2024-11-20 12:46:41.464874] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:08.373 [2024-11-20 12:46:41.465214] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:08.373 [2024-11-20 12:46:41.465257] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:08.632 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.632 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.632 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.632 [2024-11-20 12:46:41.596590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.632 [2024-11-20 12:46:41.617851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.632 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.632 [2024-11-20 12:46:41.645637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:08.632 [2024-11-20 12:46:41.665873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:08.632 [2024-11-20 12:46:41.684508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.632 [2024-11-20 12:46:41.732444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:08.893 [2024-11-20 12:46:41.745886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.893 [2024-11-20 12:46:41.795650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:08.893 Running I/O for 1 seconds... 00:19:08.893 Running I/O for 1 seconds... 00:19:08.893 Running I/O for 1 seconds... 00:19:08.893 Running I/O for 1 seconds... 
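The four bdevperf jobs above share one pattern: a bdev_nvme_attach_controller config is generated on the fly and handed to bdevperf through process substitution (the /dev/fd/63 in the command lines), each job gets its own core mask and shm instance id, and the script later waits on each PID. A condensed sketch of that pattern; the outer "subsystems" wrapper is assumed from bdevperf's JSON config format rather than shown verbatim in the trace:

BDEVPERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf

# JSON equivalent of the gen_nvmf_target_json output printed above.
CONFIG='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme1", "trtype": "rdma", "traddr": "192.168.100.8",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false } } ] } ] }'

# One job per workload, each on its own core and shm id, all against the same target.
$BDEVPERF -m 0x10 -i 1 --json <(echo "$CONFIG") -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BDEVPERF -m 0x20 -i 2 --json <(echo "$CONFIG") -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
$BDEVPERF -m 0x40 -i 3 --json <(echo "$CONFIG") -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BDEVPERF -m 0x80 -i 4 --json <(echo "$CONFIG") -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!

# Reap each job so a hang in any one of them fails the test instead of leaking.
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"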
00:19:09.836 00:19:09.836 Latency(us) 00:19:09.836 [2024-11-20T11:46:42.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.836 [2024-11-20T11:46:42.944Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:19:09.836 Nvme1n1 : 1.00 19564.41 76.42 0.00 0.00 6522.70 4532.91 16493.23 00:19:09.836 [2024-11-20T11:46:42.944Z] =================================================================================================================== 00:19:09.836 [2024-11-20T11:46:42.944Z] Total : 19564.41 76.42 0.00 0.00 6522.70 4532.91 16493.23 00:19:09.836 00:19:09.836 Latency(us) 00:19:09.836 [2024-11-20T11:46:42.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.836 [2024-11-20T11:46:42.944Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:19:09.836 Nvme1n1 : 1.00 19690.84 76.92 0.00 0.00 6482.89 4068.69 17803.95 00:19:09.836 [2024-11-20T11:46:42.944Z] =================================================================================================================== 00:19:09.836 [2024-11-20T11:46:42.944Z] Total : 19690.84 76.92 0.00 0.00 6482.89 4068.69 17803.95 00:19:09.836 00:19:09.837 Latency(us) 00:19:09.837 [2024-11-20T11:46:42.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.837 [2024-11-20T11:46:42.945Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:19:09.837 Nvme1n1 : 1.00 190543.41 744.31 0.00 0.00 668.74 266.24 2252.80 00:19:09.837 [2024-11-20T11:46:42.945Z] =================================================================================================================== 00:19:09.837 [2024-11-20T11:46:42.945Z] Total : 190543.41 744.31 0.00 0.00 668.74 266.24 2252.80 00:19:10.098 00:19:10.098 Latency(us) 00:19:10.098 [2024-11-20T11:46:43.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.098 [2024-11-20T11:46:43.206Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:19:10.098 Nvme1n1 : 1.00 24802.94 96.89 0.00 0.00 5148.36 3112.96 17039.36 00:19:10.098 [2024-11-20T11:46:43.206Z] =================================================================================================================== 00:19:10.098 [2024-11-20T11:46:43.206Z] Total : 24802.94 96.89 0.00 0.00 5148.36 3112.96 17039.36 00:19:10.098 12:46:43 -- target/bdev_io_wait.sh@38 -- # wait 525488 00:19:10.098 12:46:43 -- target/bdev_io_wait.sh@39 -- # wait 525490 00:19:10.098 12:46:43 -- target/bdev_io_wait.sh@40 -- # wait 525494 00:19:10.098 12:46:43 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:10.098 12:46:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.098 12:46:43 -- common/autotest_common.sh@10 -- # set +x 00:19:10.098 12:46:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.098 12:46:43 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:10.098 12:46:43 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:10.098 12:46:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:10.098 12:46:43 -- nvmf/common.sh@116 -- # sync 00:19:10.098 12:46:43 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:10.098 12:46:43 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:10.098 12:46:43 -- nvmf/common.sh@119 -- # set +e 00:19:10.098 12:46:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:10.098 12:46:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:10.098 rmmod nvme_rdma 00:19:10.098 
rmmod nvme_fabrics 00:19:10.359 12:46:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:10.359 12:46:43 -- nvmf/common.sh@123 -- # set -e 00:19:10.359 12:46:43 -- nvmf/common.sh@124 -- # return 0 00:19:10.359 12:46:43 -- nvmf/common.sh@477 -- # '[' -n 525200 ']' 00:19:10.359 12:46:43 -- nvmf/common.sh@478 -- # killprocess 525200 00:19:10.359 12:46:43 -- common/autotest_common.sh@936 -- # '[' -z 525200 ']' 00:19:10.359 12:46:43 -- common/autotest_common.sh@940 -- # kill -0 525200 00:19:10.359 12:46:43 -- common/autotest_common.sh@941 -- # uname 00:19:10.359 12:46:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:10.359 12:46:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 525200 00:19:10.359 12:46:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:10.359 12:46:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:10.359 12:46:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 525200' 00:19:10.359 killing process with pid 525200 00:19:10.359 12:46:43 -- common/autotest_common.sh@955 -- # kill 525200 00:19:10.359 12:46:43 -- common/autotest_common.sh@960 -- # wait 525200 00:19:10.620 12:46:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:10.620 12:46:43 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:10.620 00:19:10.620 real 0m10.429s 00:19:10.620 user 0m19.591s 00:19:10.620 sys 0m6.347s 00:19:10.620 12:46:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:10.620 12:46:43 -- common/autotest_common.sh@10 -- # set +x 00:19:10.620 ************************************ 00:19:10.620 END TEST nvmf_bdev_io_wait 00:19:10.620 ************************************ 00:19:10.620 12:46:43 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:19:10.620 12:46:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:10.620 12:46:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:10.620 12:46:43 -- common/autotest_common.sh@10 -- # set +x 00:19:10.620 ************************************ 00:19:10.620 START TEST nvmf_queue_depth 00:19:10.620 ************************************ 00:19:10.620 12:46:43 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:19:10.620 * Looking for test storage... 
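A note on the killprocess step in the teardown traced above: it does not signal blindly, but first confirms the PID is still alive and that the command it is about to kill is not the sudo wrapper. A condensed sketch of those guards (not the suite's full helper, which handles a few more cases):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                        # nothing to do without a PID
    kill -0 "$pid" 2>/dev/null || return 1           # still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")          # what are we about to kill?
    [ "$name" = sudo ] && return 1                   # never signal the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # reap it if it is a child of this shell
}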
00:19:10.620 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:10.620 12:46:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:10.620 12:46:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:10.620 12:46:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:10.620 12:46:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:10.620 12:46:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:10.620 12:46:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:10.620 12:46:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:10.620 12:46:43 -- scripts/common.sh@335 -- # IFS=.-: 00:19:10.620 12:46:43 -- scripts/common.sh@335 -- # read -ra ver1 00:19:10.620 12:46:43 -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.620 12:46:43 -- scripts/common.sh@336 -- # read -ra ver2 00:19:10.620 12:46:43 -- scripts/common.sh@337 -- # local 'op=<' 00:19:10.620 12:46:43 -- scripts/common.sh@339 -- # ver1_l=2 00:19:10.620 12:46:43 -- scripts/common.sh@340 -- # ver2_l=1 00:19:10.620 12:46:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:10.620 12:46:43 -- scripts/common.sh@343 -- # case "$op" in 00:19:10.620 12:46:43 -- scripts/common.sh@344 -- # : 1 00:19:10.620 12:46:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:10.620 12:46:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:10.882 12:46:43 -- scripts/common.sh@364 -- # decimal 1 00:19:10.882 12:46:43 -- scripts/common.sh@352 -- # local d=1 00:19:10.882 12:46:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.882 12:46:43 -- scripts/common.sh@354 -- # echo 1 00:19:10.883 12:46:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:10.883 12:46:43 -- scripts/common.sh@365 -- # decimal 2 00:19:10.883 12:46:43 -- scripts/common.sh@352 -- # local d=2 00:19:10.883 12:46:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.883 12:46:43 -- scripts/common.sh@354 -- # echo 2 00:19:10.883 12:46:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:10.883 12:46:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:10.883 12:46:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:10.883 12:46:43 -- scripts/common.sh@367 -- # return 0 00:19:10.883 12:46:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.883 12:46:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:10.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.883 --rc genhtml_branch_coverage=1 00:19:10.883 --rc genhtml_function_coverage=1 00:19:10.883 --rc genhtml_legend=1 00:19:10.883 --rc geninfo_all_blocks=1 00:19:10.883 --rc geninfo_unexecuted_blocks=1 00:19:10.883 00:19:10.883 ' 00:19:10.883 12:46:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:10.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.883 --rc genhtml_branch_coverage=1 00:19:10.883 --rc genhtml_function_coverage=1 00:19:10.883 --rc genhtml_legend=1 00:19:10.883 --rc geninfo_all_blocks=1 00:19:10.883 --rc geninfo_unexecuted_blocks=1 00:19:10.883 00:19:10.883 ' 00:19:10.883 12:46:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:10.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.883 --rc genhtml_branch_coverage=1 00:19:10.883 --rc genhtml_function_coverage=1 00:19:10.883 --rc genhtml_legend=1 00:19:10.883 --rc geninfo_all_blocks=1 00:19:10.883 --rc geninfo_unexecuted_blocks=1 00:19:10.883 00:19:10.883 ' 
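The lcov version gate that opens both this test and the previous one (lt 1.15 2 -> cmp_versions) is a field-by-field comparison of dotted version strings. A minimal sketch of the same idea, assuming purely numeric fields (the suite's decimal helper deals with anything else):

lt() {  # usage: lt 1.15 2  -> succeeds if $1 is strictly older than $2
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}     # missing fields count as 0
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1                                # equal is not "less than"
}

if lt 1.15 2; then echo "lcov 1.15 predates 2.x, keep the legacy --rc options"; fi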
00:19:10.883 12:46:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:10.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.883 --rc genhtml_branch_coverage=1 00:19:10.883 --rc genhtml_function_coverage=1 00:19:10.883 --rc genhtml_legend=1 00:19:10.883 --rc geninfo_all_blocks=1 00:19:10.883 --rc geninfo_unexecuted_blocks=1 00:19:10.883 00:19:10.883 ' 00:19:10.883 12:46:43 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:10.883 12:46:43 -- nvmf/common.sh@7 -- # uname -s 00:19:10.883 12:46:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.883 12:46:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.883 12:46:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.883 12:46:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.883 12:46:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.883 12:46:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.883 12:46:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.883 12:46:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.883 12:46:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.883 12:46:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.883 12:46:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:10.883 12:46:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:10.883 12:46:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.883 12:46:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.883 12:46:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.883 12:46:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:10.883 12:46:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.883 12:46:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.883 12:46:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.883 12:46:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.883 12:46:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.883 12:46:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.883 12:46:43 -- paths/export.sh@5 -- # export PATH 00:19:10.883 12:46:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.883 12:46:43 -- nvmf/common.sh@46 -- # : 0 00:19:10.883 12:46:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:10.883 12:46:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:10.883 12:46:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:10.883 12:46:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.883 12:46:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.883 12:46:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:10.883 12:46:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:10.883 12:46:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:10.883 12:46:43 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:19:10.883 12:46:43 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:19:10.883 12:46:43 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:10.883 12:46:43 -- target/queue_depth.sh@19 -- # nvmftestinit 00:19:10.883 12:46:43 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:10.883 12:46:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.883 12:46:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:10.883 12:46:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:10.883 12:46:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:10.883 12:46:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.883 12:46:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.883 12:46:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.883 12:46:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:10.883 12:46:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:10.883 12:46:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:10.883 12:46:43 -- common/autotest_common.sh@10 -- # set +x 00:19:19.026 12:46:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:19.026 12:46:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:19.026 12:46:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:19.026 12:46:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:19.026 12:46:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:19.026 12:46:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:19.026 12:46:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:19.026 12:46:50 -- nvmf/common.sh@294 -- # net_devs=() 
00:19:19.026 12:46:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:19.026 12:46:50 -- nvmf/common.sh@295 -- # e810=() 00:19:19.026 12:46:50 -- nvmf/common.sh@295 -- # local -ga e810 00:19:19.026 12:46:50 -- nvmf/common.sh@296 -- # x722=() 00:19:19.026 12:46:50 -- nvmf/common.sh@296 -- # local -ga x722 00:19:19.026 12:46:50 -- nvmf/common.sh@297 -- # mlx=() 00:19:19.026 12:46:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:19.026 12:46:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:19.026 12:46:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:19.026 12:46:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:19.026 12:46:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:19.026 12:46:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:19.026 12:46:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:19.026 12:46:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:19.026 12:46:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:19.026 12:46:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:19.026 12:46:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:19.026 12:46:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:19.026 12:46:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:19.026 12:46:50 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:19.026 12:46:50 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:19.026 12:46:50 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:19.026 12:46:50 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:19.026 12:46:50 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:19.026 12:46:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:19.026 12:46:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:19.026 12:46:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:19:19.026 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:19:19.026 12:46:50 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:19.026 12:46:50 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:19.026 12:46:50 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:19.026 12:46:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:19.026 12:46:50 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:19.026 12:46:50 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:19.026 12:46:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:19.026 12:46:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:19:19.026 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:19:19.026 12:46:50 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:19.026 12:46:50 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:19.026 12:46:50 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:19.026 12:46:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:19.026 12:46:50 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:19.026 12:46:50 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:19.026 12:46:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:19.026 12:46:50 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:19.026 12:46:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:19.026 12:46:50 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.027 12:46:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:19.027 12:46:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.027 12:46:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:19:19.027 Found net devices under 0000:98:00.0: mlx_0_0 00:19:19.027 12:46:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.027 12:46:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:19.027 12:46:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.027 12:46:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:19.027 12:46:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.027 12:46:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:19:19.027 Found net devices under 0000:98:00.1: mlx_0_1 00:19:19.027 12:46:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.027 12:46:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:19.027 12:46:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:19.027 12:46:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:19.027 12:46:50 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:19.027 12:46:50 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:19.027 12:46:50 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:19.027 12:46:50 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:19.027 12:46:50 -- nvmf/common.sh@57 -- # uname 00:19:19.027 12:46:50 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:19.027 12:46:50 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:19.027 12:46:50 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:19.027 12:46:50 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:19.027 12:46:50 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:19.027 12:46:50 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:19.027 12:46:50 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:19.027 12:46:50 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:19.027 12:46:50 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:19.027 12:46:50 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:19.027 12:46:50 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:19.027 12:46:50 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:19.027 12:46:50 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:19.027 12:46:50 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:19.027 12:46:50 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:19.027 12:46:50 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:19.027 12:46:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:19.027 12:46:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:19.027 12:46:50 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:19.027 12:46:50 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:19.027 12:46:50 -- nvmf/common.sh@104 -- # continue 2 00:19:19.027 12:46:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:19.027 12:46:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:19.027 12:46:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:19.027 12:46:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:19.027 12:46:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:19.027 12:46:50 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:19.027 12:46:50 -- 
nvmf/common.sh@104 -- # continue 2 00:19:19.027 12:46:50 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:19.027 12:46:50 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:19.027 12:46:50 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:19.027 12:46:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:19.027 12:46:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:19.027 12:46:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:19.027 12:46:50 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:19.027 12:46:50 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:19.027 12:46:50 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:19.027 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:19.027 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:19:19.027 altname enp152s0f0np0 00:19:19.027 altname ens817f0np0 00:19:19.027 inet 192.168.100.8/24 scope global mlx_0_0 00:19:19.027 valid_lft forever preferred_lft forever 00:19:19.027 12:46:50 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:19.027 12:46:50 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:19.027 12:46:50 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:19.027 12:46:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:19.027 12:46:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:19.027 12:46:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:19.027 12:46:50 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:19.027 12:46:50 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:19.027 12:46:50 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:19.027 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:19.027 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:19:19.027 altname enp152s0f1np1 00:19:19.027 altname ens817f1np1 00:19:19.027 inet 192.168.100.9/24 scope global mlx_0_1 00:19:19.027 valid_lft forever preferred_lft forever 00:19:19.027 12:46:50 -- nvmf/common.sh@410 -- # return 0 00:19:19.027 12:46:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:19.027 12:46:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:19.027 12:46:50 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:19.027 12:46:50 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:19.027 12:46:50 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:19.027 12:46:50 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:19.027 12:46:50 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:19.027 12:46:50 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:19.027 12:46:50 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:19.027 12:46:50 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:19.027 12:46:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:19.027 12:46:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:19.027 12:46:50 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:19.027 12:46:50 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:19.027 12:46:50 -- nvmf/common.sh@104 -- # continue 2 00:19:19.027 12:46:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:19.027 12:46:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:19.027 12:46:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:19.027 12:46:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:19.027 12:46:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:19:19.027 12:46:50 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:19.027 12:46:50 -- nvmf/common.sh@104 -- # continue 2 00:19:19.027 12:46:50 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:19.027 12:46:50 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:19.027 12:46:50 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:19.027 12:46:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:19.027 12:46:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:19.027 12:46:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:19.027 12:46:51 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:19.027 12:46:51 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:19.027 12:46:51 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:19.027 12:46:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:19.027 12:46:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:19.027 12:46:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:19.027 12:46:51 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:19.027 192.168.100.9' 00:19:19.027 12:46:51 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:19.027 192.168.100.9' 00:19:19.027 12:46:51 -- nvmf/common.sh@445 -- # head -n 1 00:19:19.027 12:46:51 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:19.027 12:46:51 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:19.027 192.168.100.9' 00:19:19.027 12:46:51 -- nvmf/common.sh@446 -- # tail -n +2 00:19:19.027 12:46:51 -- nvmf/common.sh@446 -- # head -n 1 00:19:19.027 12:46:51 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:19.027 12:46:51 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:19.027 12:46:51 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:19.027 12:46:51 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:19.027 12:46:51 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:19.027 12:46:51 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:19.027 12:46:51 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:19:19.027 12:46:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:19.027 12:46:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:19.027 12:46:51 -- common/autotest_common.sh@10 -- # set +x 00:19:19.027 12:46:51 -- nvmf/common.sh@469 -- # nvmfpid=529628 00:19:19.027 12:46:51 -- nvmf/common.sh@470 -- # waitforlisten 529628 00:19:19.027 12:46:51 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:19.027 12:46:51 -- common/autotest_common.sh@829 -- # '[' -z 529628 ']' 00:19:19.027 12:46:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.027 12:46:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:19.027 12:46:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.027 12:46:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:19.027 12:46:51 -- common/autotest_common.sh@10 -- # set +x 00:19:19.027 [2024-11-20 12:46:51.129794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
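The trace above is nvmf/common.sh collecting the RDMA addresses: get_ip_address scrapes each mlx interface's IPv4 address out of the ip -o -4 addr show output, and the first and second entries of the resulting RDMA_IP_LIST become NVMF_FIRST_TARGET_IP (192.168.100.8) and NVMF_SECOND_TARGET_IP (192.168.100.9). A minimal standalone sketch of that parsing follows; the interface names mlx_0_0/mlx_0_1 are just the netdevs reported earlier in this log and will differ on other hosts.

  #!/usr/bin/env bash
  # Sketch only: mirrors the ip/awk/cut pipeline traced above.
  get_ip_address() {
      local interface=$1
      # "ip -o -4" prints one line per address; field 4 is "ADDR/PREFIX", so strip the prefix
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
  echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # expected here: 192.168.100.8 192.168.100.9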
00:19:19.027 [2024-11-20 12:46:51.129855] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.027 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.027 [2024-11-20 12:46:51.187699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.027 [2024-11-20 12:46:51.251406] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:19.027 [2024-11-20 12:46:51.251503] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.027 [2024-11-20 12:46:51.251508] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.027 [2024-11-20 12:46:51.251514] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:19.027 [2024-11-20 12:46:51.251530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.027 12:46:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:19.028 12:46:51 -- common/autotest_common.sh@862 -- # return 0 00:19:19.028 12:46:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:19.028 12:46:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:19.028 12:46:51 -- common/autotest_common.sh@10 -- # set +x 00:19:19.028 12:46:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.028 12:46:51 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:19.028 12:46:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.028 12:46:51 -- common/autotest_common.sh@10 -- # set +x 00:19:19.028 [2024-11-20 12:46:52.017184] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f49950/0x1f4de40) succeed. 00:19:19.028 [2024-11-20 12:46:52.026263] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f4ae50/0x1f8f4e0) succeed. 
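At this point nvmf_tgt is running on core mask 0x2 and the rdma transport has been created, which is what produces the two "Create IB device mlx5_0/mlx5_1 ... succeed" notices above (one per port of the adapter). In this harness rpc_cmd issues the call through SPDK's scripts/rpc.py; a rough standalone equivalent of the bring-up, with the polling loop standing in for waitforlisten, is sketched below (paths shortened to the SPDK repo root).

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # approximation of waitforlisten: block until the app answers on its default RPC socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 1; done
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192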
00:19:19.028 12:46:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.028 12:46:52 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:19.028 12:46:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.028 12:46:52 -- common/autotest_common.sh@10 -- # set +x 00:19:19.028 Malloc0 00:19:19.028 12:46:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.028 12:46:52 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:19.028 12:46:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.028 12:46:52 -- common/autotest_common.sh@10 -- # set +x 00:19:19.028 12:46:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.028 12:46:52 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:19.028 12:46:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.028 12:46:52 -- common/autotest_common.sh@10 -- # set +x 00:19:19.028 12:46:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.028 12:46:52 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:19.028 12:46:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.028 12:46:52 -- common/autotest_common.sh@10 -- # set +x 00:19:19.028 [2024-11-20 12:46:52.130578] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:19.287 12:46:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.287 12:46:52 -- target/queue_depth.sh@30 -- # bdevperf_pid=529929 00:19:19.287 12:46:52 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:19.287 12:46:52 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:19.287 12:46:52 -- target/queue_depth.sh@33 -- # waitforlisten 529929 /var/tmp/bdevperf.sock 00:19:19.287 12:46:52 -- common/autotest_common.sh@829 -- # '[' -z 529929 ']' 00:19:19.287 12:46:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.287 12:46:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:19.287 12:46:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.288 12:46:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:19.288 12:46:52 -- common/autotest_common.sh@10 -- # set +x 00:19:19.288 [2024-11-20 12:46:52.180529] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
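The rpc_cmd calls traced above provision the target end to end: a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 that accepts any host (-a), the bdev attached as its namespace, and an RDMA listener on the first target IP at port 4420 (the "NVMe/RDMA Target Listening" notice). bdevperf is then launched against its own RPC socket with a queue-depth-1024 verify workload of 4 KiB I/Os for 10 seconds. Issued directly with rpc.py, the same provisioning sequence amounts to:

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420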
00:19:19.288 [2024-11-20 12:46:52.180584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529929 ]
00:19:19.288 EAL: No free 2048 kB hugepages reported on node 1
00:19:19.288 [2024-11-20 12:46:52.245423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:19.288 [2024-11-20 12:46:52.318011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:20.228 12:46:52 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:20.228 12:46:52 -- common/autotest_common.sh@862 -- # return 0
00:19:20.228 12:46:52 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:19:20.228 12:46:52 -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:20.228 12:46:52 -- common/autotest_common.sh@10 -- # set +x
00:19:20.228 NVMe0n1
00:19:20.228 12:46:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:20.228 12:46:53 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:20.228 Running I/O for 10 seconds...
00:19:30.231
00:19:30.231 Latency(us)
00:19:30.231 [2024-11-20T11:47:03.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:30.231 [2024-11-20T11:47:03.339Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:19:30.231 Verification LBA range: start 0x0 length 0x4000
00:19:30.231 NVMe0n1 : 10.03 25021.25 97.74 0.00 0.00 40828.16 6990.51 39103.15
00:19:30.231 [2024-11-20T11:47:03.339Z] ===================================================================================================================
00:19:30.231 [2024-11-20T11:47:03.339Z] Total : 25021.25 97.74 0.00 0.00 40828.16 6990.51 39103.15
00:19:30.231 0
00:19:30.231 12:47:03 -- target/queue_depth.sh@39 -- # killprocess 529929
00:19:30.231 12:47:03 -- common/autotest_common.sh@936 -- # '[' -z 529929 ']'
00:19:30.231 12:47:03 -- common/autotest_common.sh@940 -- # kill -0 529929
00:19:30.231 12:47:03 -- common/autotest_common.sh@941 -- # uname
00:19:30.231 12:47:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:30.231 12:47:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 529929
00:19:30.231 12:47:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:30.231 12:47:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:30.231 12:47:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 529929'
00:19:30.231 killing process with pid 529929
00:19:30.231 12:47:03 -- common/autotest_common.sh@955 -- # kill 529929
00:19:30.231 Received shutdown signal, test time was about 10.000000 seconds
00:19:30.231
00:19:30.231 Latency(us)
00:19:30.231 [2024-11-20T11:47:03.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:30.231 [2024-11-20T11:47:03.339Z] ===================================================================================================================
00:19:30.231 [2024-11-20T11:47:03.339Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:30.231 12:47:03 -- common/autotest_common.sh@960 -- # wait 529929
00:19:30.492 12:47:03 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:19:30.492 12:47:03 -- target/queue_depth.sh@43 -- # nvmftestfini
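The summary row above reports 25021.25 IOPS over the 10.03 s run at queue depth 1024; the MiB/s column is simply that rate scaled by the 4096-byte I/O size, which makes a quick sanity check of the table possible:

  # sanity check of the bdevperf summary (IOPS 25021.25, 4096 B I/Os, 10.03 s runtime)
  awk 'BEGIN {
      iops = 25021.25; iosize = 4096; runtime = 10.03
      printf "MiB/s      = %.2f\n", iops * iosize / (1024 * 1024)   # ~97.74, matches the table
      printf "total I/Os = %.0f\n", iops * runtime                  # ~250963 completed during the run
  }'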
00:19:30.493 12:47:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:30.493 12:47:03 -- nvmf/common.sh@116 -- # sync 00:19:30.493 12:47:03 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:30.493 12:47:03 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:30.493 12:47:03 -- nvmf/common.sh@119 -- # set +e 00:19:30.493 12:47:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:30.493 12:47:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:30.493 rmmod nvme_rdma 00:19:30.493 rmmod nvme_fabrics 00:19:30.493 12:47:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:30.493 12:47:03 -- nvmf/common.sh@123 -- # set -e 00:19:30.493 12:47:03 -- nvmf/common.sh@124 -- # return 0 00:19:30.493 12:47:03 -- nvmf/common.sh@477 -- # '[' -n 529628 ']' 00:19:30.493 12:47:03 -- nvmf/common.sh@478 -- # killprocess 529628 00:19:30.493 12:47:03 -- common/autotest_common.sh@936 -- # '[' -z 529628 ']' 00:19:30.493 12:47:03 -- common/autotest_common.sh@940 -- # kill -0 529628 00:19:30.493 12:47:03 -- common/autotest_common.sh@941 -- # uname 00:19:30.493 12:47:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:30.493 12:47:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 529628 00:19:30.493 12:47:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:30.493 12:47:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:30.493 12:47:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 529628' 00:19:30.493 killing process with pid 529628 00:19:30.493 12:47:03 -- common/autotest_common.sh@955 -- # kill 529628 00:19:30.493 12:47:03 -- common/autotest_common.sh@960 -- # wait 529628 00:19:30.754 12:47:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:30.754 12:47:03 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:30.754 00:19:30.754 real 0m20.168s 00:19:30.754 user 0m26.259s 00:19:30.754 sys 0m6.162s 00:19:30.754 12:47:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:30.754 12:47:03 -- common/autotest_common.sh@10 -- # set +x 00:19:30.754 ************************************ 00:19:30.754 END TEST nvmf_queue_depth 00:19:30.754 ************************************ 00:19:30.754 12:47:03 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:30.754 12:47:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:30.754 12:47:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:30.754 12:47:03 -- common/autotest_common.sh@10 -- # set +x 00:19:30.754 ************************************ 00:19:30.754 START TEST nvmf_multipath 00:19:30.754 ************************************ 00:19:30.754 12:47:03 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:30.754 * Looking for test storage... 
00:19:30.754 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:30.754 12:47:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:30.754 12:47:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:30.754 12:47:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:31.016 12:47:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:31.016 12:47:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:31.016 12:47:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:31.016 12:47:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:31.016 12:47:03 -- scripts/common.sh@335 -- # IFS=.-: 00:19:31.016 12:47:03 -- scripts/common.sh@335 -- # read -ra ver1 00:19:31.016 12:47:03 -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.016 12:47:03 -- scripts/common.sh@336 -- # read -ra ver2 00:19:31.016 12:47:03 -- scripts/common.sh@337 -- # local 'op=<' 00:19:31.016 12:47:03 -- scripts/common.sh@339 -- # ver1_l=2 00:19:31.016 12:47:03 -- scripts/common.sh@340 -- # ver2_l=1 00:19:31.016 12:47:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:31.016 12:47:03 -- scripts/common.sh@343 -- # case "$op" in 00:19:31.016 12:47:03 -- scripts/common.sh@344 -- # : 1 00:19:31.016 12:47:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:31.016 12:47:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:31.016 12:47:03 -- scripts/common.sh@364 -- # decimal 1 00:19:31.016 12:47:03 -- scripts/common.sh@352 -- # local d=1 00:19:31.016 12:47:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.016 12:47:03 -- scripts/common.sh@354 -- # echo 1 00:19:31.016 12:47:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:31.016 12:47:03 -- scripts/common.sh@365 -- # decimal 2 00:19:31.016 12:47:03 -- scripts/common.sh@352 -- # local d=2 00:19:31.016 12:47:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.016 12:47:03 -- scripts/common.sh@354 -- # echo 2 00:19:31.016 12:47:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:31.016 12:47:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:31.016 12:47:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:31.016 12:47:03 -- scripts/common.sh@367 -- # return 0 00:19:31.016 12:47:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.016 12:47:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:31.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.016 --rc genhtml_branch_coverage=1 00:19:31.016 --rc genhtml_function_coverage=1 00:19:31.016 --rc genhtml_legend=1 00:19:31.016 --rc geninfo_all_blocks=1 00:19:31.016 --rc geninfo_unexecuted_blocks=1 00:19:31.016 00:19:31.016 ' 00:19:31.016 12:47:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:31.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.016 --rc genhtml_branch_coverage=1 00:19:31.016 --rc genhtml_function_coverage=1 00:19:31.016 --rc genhtml_legend=1 00:19:31.016 --rc geninfo_all_blocks=1 00:19:31.016 --rc geninfo_unexecuted_blocks=1 00:19:31.016 00:19:31.016 ' 00:19:31.016 12:47:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:31.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.016 --rc genhtml_branch_coverage=1 00:19:31.016 --rc genhtml_function_coverage=1 00:19:31.016 --rc genhtml_legend=1 00:19:31.016 --rc geninfo_all_blocks=1 00:19:31.016 --rc geninfo_unexecuted_blocks=1 00:19:31.016 00:19:31.016 ' 
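The autotest_common.sh/scripts/common.sh trace above is the harness picking its coverage flags: it compares the installed lcov version against 2 field by field (the lt 1.15 2 / cmp_versions calls) and, since lcov here is older, keeps the --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 options. A simplified sketch of that comparison, written only to illustrate the decision rather than copied from scripts/common.sh:

  # simplified field-by-field "less than" version compare, e.g. lt 1.15 2 -> true
  lt() {
      local IFS=.- i
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for ((i = 0; i < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); i++)); do
          (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
          (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
      done
      return 1
  }
  lt 1.15 2 && echo "lcov older than 2: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"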
00:19:31.016 12:47:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:31.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.016 --rc genhtml_branch_coverage=1 00:19:31.016 --rc genhtml_function_coverage=1 00:19:31.016 --rc genhtml_legend=1 00:19:31.016 --rc geninfo_all_blocks=1 00:19:31.016 --rc geninfo_unexecuted_blocks=1 00:19:31.016 00:19:31.016 ' 00:19:31.016 12:47:03 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:31.016 12:47:03 -- nvmf/common.sh@7 -- # uname -s 00:19:31.016 12:47:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.016 12:47:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.016 12:47:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.016 12:47:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.016 12:47:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.016 12:47:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.016 12:47:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.016 12:47:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.016 12:47:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.016 12:47:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.016 12:47:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:31.016 12:47:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:31.016 12:47:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.016 12:47:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.016 12:47:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:31.016 12:47:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:31.016 12:47:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.016 12:47:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.016 12:47:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.016 12:47:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.016 12:47:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.016 12:47:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.016 12:47:03 -- paths/export.sh@5 -- # export PATH 00:19:31.017 12:47:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.017 12:47:03 -- nvmf/common.sh@46 -- # : 0 00:19:31.017 12:47:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:31.017 12:47:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:31.017 12:47:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:31.017 12:47:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.017 12:47:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.017 12:47:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:31.017 12:47:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:31.017 12:47:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:31.017 12:47:03 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:31.017 12:47:03 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:31.017 12:47:03 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:31.017 12:47:03 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:31.017 12:47:03 -- target/multipath.sh@43 -- # nvmftestinit 00:19:31.017 12:47:03 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:31.017 12:47:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.017 12:47:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:31.017 12:47:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:31.017 12:47:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:31.017 12:47:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.017 12:47:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.017 12:47:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.017 12:47:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:31.017 12:47:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:31.017 12:47:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:31.017 12:47:03 -- common/autotest_common.sh@10 -- # set +x 00:19:39.163 12:47:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:39.163 12:47:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:39.163 12:47:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:39.163 12:47:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:39.163 12:47:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:39.163 12:47:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:39.163 12:47:10 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:19:39.163 12:47:10 -- nvmf/common.sh@294 -- # net_devs=() 00:19:39.163 12:47:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:39.163 12:47:10 -- nvmf/common.sh@295 -- # e810=() 00:19:39.163 12:47:10 -- nvmf/common.sh@295 -- # local -ga e810 00:19:39.163 12:47:10 -- nvmf/common.sh@296 -- # x722=() 00:19:39.163 12:47:10 -- nvmf/common.sh@296 -- # local -ga x722 00:19:39.163 12:47:10 -- nvmf/common.sh@297 -- # mlx=() 00:19:39.163 12:47:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:39.163 12:47:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.163 12:47:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.163 12:47:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.163 12:47:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.163 12:47:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.163 12:47:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.163 12:47:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.163 12:47:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.163 12:47:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.163 12:47:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.163 12:47:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.163 12:47:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:39.163 12:47:10 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:39.163 12:47:10 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:39.163 12:47:10 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:39.163 12:47:10 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:39.163 12:47:10 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:39.163 12:47:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:39.163 12:47:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:39.163 12:47:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:19:39.163 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:19:39.163 12:47:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:39.163 12:47:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:39.163 12:47:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:39.163 12:47:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:39.163 12:47:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:39.163 12:47:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:39.163 12:47:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:39.163 12:47:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:19:39.163 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:19:39.163 12:47:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:39.163 12:47:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:39.163 12:47:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:39.163 12:47:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:39.163 12:47:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:39.163 12:47:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:39.163 12:47:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:39.163 12:47:10 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:39.163 12:47:10 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:39.163 12:47:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.163 12:47:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:39.163 12:47:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.163 12:47:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:19:39.163 Found net devices under 0000:98:00.0: mlx_0_0 00:19:39.163 12:47:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.163 12:47:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:39.163 12:47:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.163 12:47:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:39.163 12:47:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.163 12:47:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:19:39.163 Found net devices under 0000:98:00.1: mlx_0_1 00:19:39.163 12:47:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.163 12:47:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:39.163 12:47:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:39.163 12:47:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:39.163 12:47:10 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:39.163 12:47:10 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:39.163 12:47:10 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:39.163 12:47:10 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:39.163 12:47:10 -- nvmf/common.sh@57 -- # uname 00:19:39.163 12:47:10 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:39.164 12:47:10 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:39.164 12:47:10 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:39.164 12:47:10 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:39.164 12:47:10 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:39.164 12:47:10 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:39.164 12:47:10 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:39.164 12:47:10 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:39.164 12:47:10 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:39.164 12:47:10 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:39.164 12:47:10 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:39.164 12:47:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:39.164 12:47:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:39.164 12:47:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:39.164 12:47:11 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:39.164 12:47:11 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:39.164 12:47:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:39.164 12:47:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:39.164 12:47:11 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:39.164 12:47:11 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:39.164 12:47:11 -- nvmf/common.sh@104 -- # continue 2 00:19:39.164 12:47:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:39.164 12:47:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:39.164 12:47:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:39.164 12:47:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:39.164 12:47:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:19:39.164 12:47:11 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:39.164 12:47:11 -- nvmf/common.sh@104 -- # continue 2 00:19:39.164 12:47:11 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:39.164 12:47:11 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:39.164 12:47:11 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:39.164 12:47:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:39.164 12:47:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:39.164 12:47:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:39.164 12:47:11 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:39.164 12:47:11 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:39.164 12:47:11 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:39.164 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:39.164 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:19:39.164 altname enp152s0f0np0 00:19:39.164 altname ens817f0np0 00:19:39.164 inet 192.168.100.8/24 scope global mlx_0_0 00:19:39.164 valid_lft forever preferred_lft forever 00:19:39.164 12:47:11 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:39.164 12:47:11 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:39.164 12:47:11 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:39.164 12:47:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:39.164 12:47:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:39.164 12:47:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:39.164 12:47:11 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:39.164 12:47:11 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:39.164 12:47:11 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:39.164 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:39.164 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:19:39.164 altname enp152s0f1np1 00:19:39.164 altname ens817f1np1 00:19:39.164 inet 192.168.100.9/24 scope global mlx_0_1 00:19:39.164 valid_lft forever preferred_lft forever 00:19:39.164 12:47:11 -- nvmf/common.sh@410 -- # return 0 00:19:39.164 12:47:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:39.164 12:47:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:39.164 12:47:11 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:39.164 12:47:11 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:39.164 12:47:11 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:39.164 12:47:11 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:39.164 12:47:11 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:39.164 12:47:11 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:39.164 12:47:11 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:39.164 12:47:11 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:39.164 12:47:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:39.164 12:47:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:39.164 12:47:11 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:39.164 12:47:11 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:39.164 12:47:11 -- nvmf/common.sh@104 -- # continue 2 00:19:39.164 12:47:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:39.164 12:47:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:39.164 12:47:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:39.164 12:47:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:19:39.164 12:47:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:39.164 12:47:11 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:39.164 12:47:11 -- nvmf/common.sh@104 -- # continue 2 00:19:39.164 12:47:11 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:39.164 12:47:11 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:39.164 12:47:11 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:39.164 12:47:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:39.164 12:47:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:39.164 12:47:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:39.164 12:47:11 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:39.164 12:47:11 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:39.164 12:47:11 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:39.164 12:47:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:39.164 12:47:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:39.164 12:47:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:39.164 12:47:11 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:39.164 192.168.100.9' 00:19:39.164 12:47:11 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:39.164 192.168.100.9' 00:19:39.164 12:47:11 -- nvmf/common.sh@445 -- # head -n 1 00:19:39.164 12:47:11 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:39.164 12:47:11 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:39.164 192.168.100.9' 00:19:39.164 12:47:11 -- nvmf/common.sh@446 -- # tail -n +2 00:19:39.164 12:47:11 -- nvmf/common.sh@446 -- # head -n 1 00:19:39.164 12:47:11 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:39.164 12:47:11 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:39.164 12:47:11 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:39.164 12:47:11 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:39.164 12:47:11 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:39.164 12:47:11 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:39.164 12:47:11 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:19:39.164 12:47:11 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:19:39.164 12:47:11 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:19:39.164 run this test only with TCP transport for now 00:19:39.164 12:47:11 -- target/multipath.sh@53 -- # nvmftestfini 00:19:39.164 12:47:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:39.164 12:47:11 -- nvmf/common.sh@116 -- # sync 00:19:39.164 12:47:11 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:39.164 12:47:11 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:39.164 12:47:11 -- nvmf/common.sh@119 -- # set +e 00:19:39.164 12:47:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:39.164 12:47:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:39.164 rmmod nvme_rdma 00:19:39.164 rmmod nvme_fabrics 00:19:39.164 12:47:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:39.164 12:47:11 -- nvmf/common.sh@123 -- # set -e 00:19:39.164 12:47:11 -- nvmf/common.sh@124 -- # return 0 00:19:39.165 12:47:11 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:39.165 12:47:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:39.165 12:47:11 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:39.165 12:47:11 -- target/multipath.sh@54 -- # exit 0 00:19:39.165 12:47:11 -- target/multipath.sh@1 -- # nvmftestfini 00:19:39.165 12:47:11 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:19:39.165 12:47:11 -- nvmf/common.sh@116 -- # sync 00:19:39.165 12:47:11 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:39.165 12:47:11 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:39.165 12:47:11 -- nvmf/common.sh@119 -- # set +e 00:19:39.165 12:47:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:39.165 12:47:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:39.165 12:47:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:39.165 12:47:11 -- nvmf/common.sh@123 -- # set -e 00:19:39.165 12:47:11 -- nvmf/common.sh@124 -- # return 0 00:19:39.165 12:47:11 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:39.165 12:47:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:39.165 12:47:11 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:39.165 00:19:39.165 real 0m7.506s 00:19:39.165 user 0m2.186s 00:19:39.165 sys 0m5.400s 00:19:39.165 12:47:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:39.165 12:47:11 -- common/autotest_common.sh@10 -- # set +x 00:19:39.165 ************************************ 00:19:39.165 END TEST nvmf_multipath 00:19:39.165 ************************************ 00:19:39.165 12:47:11 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:39.165 12:47:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:39.165 12:47:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:39.165 12:47:11 -- common/autotest_common.sh@10 -- # set +x 00:19:39.165 ************************************ 00:19:39.165 START TEST nvmf_zcopy 00:19:39.165 ************************************ 00:19:39.165 12:47:11 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:39.165 * Looking for test storage... 00:19:39.165 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:39.165 12:47:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:39.165 12:47:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:39.165 12:47:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:39.165 12:47:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:39.165 12:47:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:39.165 12:47:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:39.165 12:47:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:39.165 12:47:11 -- scripts/common.sh@335 -- # IFS=.-: 00:19:39.165 12:47:11 -- scripts/common.sh@335 -- # read -ra ver1 00:19:39.165 12:47:11 -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.165 12:47:11 -- scripts/common.sh@336 -- # read -ra ver2 00:19:39.165 12:47:11 -- scripts/common.sh@337 -- # local 'op=<' 00:19:39.165 12:47:11 -- scripts/common.sh@339 -- # ver1_l=2 00:19:39.165 12:47:11 -- scripts/common.sh@340 -- # ver2_l=1 00:19:39.165 12:47:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:39.165 12:47:11 -- scripts/common.sh@343 -- # case "$op" in 00:19:39.165 12:47:11 -- scripts/common.sh@344 -- # : 1 00:19:39.165 12:47:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:39.165 12:47:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:39.165 12:47:11 -- scripts/common.sh@364 -- # decimal 1 00:19:39.165 12:47:11 -- scripts/common.sh@352 -- # local d=1 00:19:39.165 12:47:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.165 12:47:11 -- scripts/common.sh@354 -- # echo 1 00:19:39.165 12:47:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:39.165 12:47:11 -- scripts/common.sh@365 -- # decimal 2 00:19:39.165 12:47:11 -- scripts/common.sh@352 -- # local d=2 00:19:39.165 12:47:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.165 12:47:11 -- scripts/common.sh@354 -- # echo 2 00:19:39.165 12:47:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:39.165 12:47:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:39.165 12:47:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:39.165 12:47:11 -- scripts/common.sh@367 -- # return 0 00:19:39.165 12:47:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.165 12:47:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:39.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.165 --rc genhtml_branch_coverage=1 00:19:39.165 --rc genhtml_function_coverage=1 00:19:39.165 --rc genhtml_legend=1 00:19:39.165 --rc geninfo_all_blocks=1 00:19:39.165 --rc geninfo_unexecuted_blocks=1 00:19:39.165 00:19:39.165 ' 00:19:39.165 12:47:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:39.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.165 --rc genhtml_branch_coverage=1 00:19:39.165 --rc genhtml_function_coverage=1 00:19:39.165 --rc genhtml_legend=1 00:19:39.165 --rc geninfo_all_blocks=1 00:19:39.165 --rc geninfo_unexecuted_blocks=1 00:19:39.165 00:19:39.165 ' 00:19:39.165 12:47:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:39.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.165 --rc genhtml_branch_coverage=1 00:19:39.165 --rc genhtml_function_coverage=1 00:19:39.165 --rc genhtml_legend=1 00:19:39.165 --rc geninfo_all_blocks=1 00:19:39.165 --rc geninfo_unexecuted_blocks=1 00:19:39.165 00:19:39.165 ' 00:19:39.165 12:47:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:39.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.165 --rc genhtml_branch_coverage=1 00:19:39.165 --rc genhtml_function_coverage=1 00:19:39.165 --rc genhtml_legend=1 00:19:39.165 --rc geninfo_all_blocks=1 00:19:39.165 --rc geninfo_unexecuted_blocks=1 00:19:39.165 00:19:39.165 ' 00:19:39.165 12:47:11 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.165 12:47:11 -- nvmf/common.sh@7 -- # uname -s 00:19:39.165 12:47:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.165 12:47:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.165 12:47:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.165 12:47:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.165 12:47:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.165 12:47:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.165 12:47:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.165 12:47:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.165 12:47:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.165 12:47:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.165 12:47:11 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:39.165 12:47:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:39.165 12:47:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.165 12:47:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.165 12:47:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.165 12:47:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:39.165 12:47:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.165 12:47:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.165 12:47:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.166 12:47:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.166 12:47:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.166 12:47:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.166 12:47:11 -- paths/export.sh@5 -- # export PATH 00:19:39.166 12:47:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.166 12:47:11 -- nvmf/common.sh@46 -- # : 0 00:19:39.166 12:47:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:39.166 12:47:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:39.166 12:47:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:39.166 12:47:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.166 12:47:11 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.166 12:47:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:39.166 12:47:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:39.166 12:47:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:39.166 12:47:11 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:39.166 12:47:11 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:39.166 12:47:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.166 12:47:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:39.166 12:47:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:39.166 12:47:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:39.166 12:47:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.166 12:47:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.166 12:47:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.166 12:47:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:39.166 12:47:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:39.166 12:47:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:39.166 12:47:11 -- common/autotest_common.sh@10 -- # set +x 00:19:45.757 12:47:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:45.757 12:47:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:45.757 12:47:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:45.757 12:47:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:45.757 12:47:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:45.757 12:47:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:45.757 12:47:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:45.757 12:47:18 -- nvmf/common.sh@294 -- # net_devs=() 00:19:45.757 12:47:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:45.757 12:47:18 -- nvmf/common.sh@295 -- # e810=() 00:19:45.757 12:47:18 -- nvmf/common.sh@295 -- # local -ga e810 00:19:45.757 12:47:18 -- nvmf/common.sh@296 -- # x722=() 00:19:45.757 12:47:18 -- nvmf/common.sh@296 -- # local -ga x722 00:19:45.757 12:47:18 -- nvmf/common.sh@297 -- # mlx=() 00:19:45.757 12:47:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:45.757 12:47:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.757 12:47:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.757 12:47:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.757 12:47:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.757 12:47:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.757 12:47:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.757 12:47:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.757 12:47:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.757 12:47:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.757 12:47:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.757 12:47:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.757 12:47:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:45.757 12:47:18 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:45.757 12:47:18 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:45.757 12:47:18 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:45.757 12:47:18 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:45.757 
12:47:18 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:45.757 12:47:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:45.757 12:47:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:45.757 12:47:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:19:45.757 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:19:45.757 12:47:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:45.757 12:47:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:45.757 12:47:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:45.757 12:47:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:45.757 12:47:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:45.757 12:47:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:45.757 12:47:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:45.757 12:47:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:19:45.757 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:19:45.757 12:47:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:45.757 12:47:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:45.757 12:47:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:45.757 12:47:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:45.757 12:47:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:45.757 12:47:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:45.757 12:47:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:45.757 12:47:18 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:45.757 12:47:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:45.757 12:47:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.757 12:47:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:45.757 12:47:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.757 12:47:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:19:45.757 Found net devices under 0000:98:00.0: mlx_0_0 00:19:45.757 12:47:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.757 12:47:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:45.757 12:47:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.757 12:47:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:45.757 12:47:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.757 12:47:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:19:45.757 Found net devices under 0000:98:00.1: mlx_0_1 00:19:45.757 12:47:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.757 12:47:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:45.757 12:47:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:45.757 12:47:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:45.757 12:47:18 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:45.757 12:47:18 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:45.757 12:47:18 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:45.757 12:47:18 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:45.757 12:47:18 -- nvmf/common.sh@57 -- # uname 00:19:45.757 12:47:18 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:45.757 12:47:18 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:45.757 12:47:18 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:45.758 12:47:18 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:45.758 12:47:18 -- 
nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:45.758 12:47:18 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:45.758 12:47:18 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:45.758 12:47:18 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:45.758 12:47:18 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:45.758 12:47:18 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:45.758 12:47:18 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:45.758 12:47:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:45.758 12:47:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:45.758 12:47:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:45.758 12:47:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:45.758 12:47:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:45.758 12:47:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:45.758 12:47:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.758 12:47:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:45.758 12:47:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:45.758 12:47:18 -- nvmf/common.sh@104 -- # continue 2 00:19:45.758 12:47:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:45.758 12:47:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.758 12:47:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:45.758 12:47:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.758 12:47:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:45.758 12:47:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:45.758 12:47:18 -- nvmf/common.sh@104 -- # continue 2 00:19:45.758 12:47:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:45.758 12:47:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:45.758 12:47:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:45.758 12:47:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:45.758 12:47:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:45.758 12:47:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:45.758 12:47:18 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:45.758 12:47:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:45.758 12:47:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:45.758 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:45.758 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:19:45.758 altname enp152s0f0np0 00:19:45.758 altname ens817f0np0 00:19:45.758 inet 192.168.100.8/24 scope global mlx_0_0 00:19:45.758 valid_lft forever preferred_lft forever 00:19:45.758 12:47:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:45.758 12:47:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:45.758 12:47:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:45.758 12:47:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:45.758 12:47:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:45.758 12:47:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:45.758 12:47:18 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:45.758 12:47:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:45.758 12:47:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:45.758 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:45.758 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:19:45.758 altname enp152s0f1np1 00:19:45.758 altname 
ens817f1np1 00:19:45.758 inet 192.168.100.9/24 scope global mlx_0_1 00:19:45.758 valid_lft forever preferred_lft forever 00:19:45.758 12:47:18 -- nvmf/common.sh@410 -- # return 0 00:19:45.758 12:47:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:45.758 12:47:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:45.758 12:47:18 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:45.758 12:47:18 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:45.758 12:47:18 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:45.758 12:47:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:45.758 12:47:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:45.758 12:47:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:45.758 12:47:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:45.758 12:47:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:45.758 12:47:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:45.758 12:47:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.758 12:47:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:45.758 12:47:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:45.758 12:47:18 -- nvmf/common.sh@104 -- # continue 2 00:19:45.758 12:47:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:45.758 12:47:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.758 12:47:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:45.758 12:47:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.758 12:47:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:45.758 12:47:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:45.758 12:47:18 -- nvmf/common.sh@104 -- # continue 2 00:19:45.758 12:47:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:45.758 12:47:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:45.758 12:47:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:45.758 12:47:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:45.758 12:47:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:45.758 12:47:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:45.758 12:47:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:45.758 12:47:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:45.758 12:47:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:45.758 12:47:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:45.758 12:47:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:45.758 12:47:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:45.758 12:47:18 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:45.758 192.168.100.9' 00:19:45.758 12:47:18 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:45.758 192.168.100.9' 00:19:45.758 12:47:18 -- nvmf/common.sh@445 -- # head -n 1 00:19:45.758 12:47:18 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:45.758 12:47:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:45.758 192.168.100.9' 00:19:45.758 12:47:18 -- nvmf/common.sh@446 -- # tail -n +2 00:19:45.758 12:47:18 -- nvmf/common.sh@446 -- # head -n 1 00:19:45.758 12:47:18 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:45.758 12:47:18 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:45.758 12:47:18 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:45.758 
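The block above is get_available_rdma_ips walking the mlx_0_* netdevs and recording their IPv4 addresses as NVMF_FIRST_TARGET_IP (192.168.100.8) and NVMF_SECOND_TARGET_IP (192.168.100.9). A minimal stand-alone sketch of the same lookup, assuming the RDMA netdevs are already named mlx_0_0 and mlx_0_1 and carry those addresses:

for ifc in mlx_0_0 mlx_0_1; do
  # print only the IPv4 address, dropping the /24 prefix length
  ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# expected on this node: 192.168.100.8, then 192.168.100.9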
12:47:18 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:45.758 12:47:18 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:45.758 12:47:18 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:45.758 12:47:18 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:45.758 12:47:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:45.758 12:47:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:45.758 12:47:18 -- common/autotest_common.sh@10 -- # set +x 00:19:45.758 12:47:18 -- nvmf/common.sh@469 -- # nvmfpid=539725 00:19:45.758 12:47:18 -- nvmf/common.sh@470 -- # waitforlisten 539725 00:19:45.758 12:47:18 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:45.758 12:47:18 -- common/autotest_common.sh@829 -- # '[' -z 539725 ']' 00:19:45.758 12:47:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.758 12:47:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.758 12:47:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.758 12:47:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.758 12:47:18 -- common/autotest_common.sh@10 -- # set +x 00:19:45.758 [2024-11-20 12:47:18.614228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:45.758 [2024-11-20 12:47:18.614292] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.758 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.758 [2024-11-20 12:47:18.697823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.758 [2024-11-20 12:47:18.788562] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:45.758 [2024-11-20 12:47:18.788708] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.758 [2024-11-20 12:47:18.788717] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.758 [2024-11-20 12:47:18.788726] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
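Here nvmfappstart launches the target (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 539725) and waitforlisten blocks until the JSON-RPC socket /var/tmp/spdk.sock is accepting connections. A minimal sketch of that start-and-wait pattern, run from an SPDK checkout; it is a stand-in for the harness's waitforlisten helper, not a copy of it:

build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# poll for the RPC UNIX socket; give up if the target exits first
while [ ! -S /var/tmp/spdk.sock ]; do
  kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
  sleep 0.5
done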
00:19:45.758 [2024-11-20 12:47:18.788751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.331 12:47:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:46.331 12:47:19 -- common/autotest_common.sh@862 -- # return 0 00:19:46.331 12:47:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:46.331 12:47:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:46.331 12:47:19 -- common/autotest_common.sh@10 -- # set +x 00:19:46.592 12:47:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.592 12:47:19 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:19:46.592 12:47:19 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:19:46.592 Unsupported transport: rdma 00:19:46.592 12:47:19 -- target/zcopy.sh@17 -- # exit 0 00:19:46.592 12:47:19 -- target/zcopy.sh@1 -- # process_shm --id 0 00:19:46.592 12:47:19 -- common/autotest_common.sh@806 -- # type=--id 00:19:46.592 12:47:19 -- common/autotest_common.sh@807 -- # id=0 00:19:46.592 12:47:19 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:46.592 12:47:19 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:46.592 12:47:19 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:46.592 12:47:19 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:46.592 12:47:19 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:46.592 12:47:19 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:46.592 nvmf_trace.0 00:19:46.592 12:47:19 -- common/autotest_common.sh@821 -- # return 0 00:19:46.592 12:47:19 -- target/zcopy.sh@1 -- # nvmftestfini 00:19:46.592 12:47:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:46.592 12:47:19 -- nvmf/common.sh@116 -- # sync 00:19:46.592 12:47:19 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:46.592 12:47:19 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:46.592 12:47:19 -- nvmf/common.sh@119 -- # set +e 00:19:46.592 12:47:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:46.592 12:47:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:46.592 rmmod nvme_rdma 00:19:46.592 rmmod nvme_fabrics 00:19:46.592 12:47:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:46.592 12:47:19 -- nvmf/common.sh@123 -- # set -e 00:19:46.592 12:47:19 -- nvmf/common.sh@124 -- # return 0 00:19:46.592 12:47:19 -- nvmf/common.sh@477 -- # '[' -n 539725 ']' 00:19:46.592 12:47:19 -- nvmf/common.sh@478 -- # killprocess 539725 00:19:46.592 12:47:19 -- common/autotest_common.sh@936 -- # '[' -z 539725 ']' 00:19:46.592 12:47:19 -- common/autotest_common.sh@940 -- # kill -0 539725 00:19:46.592 12:47:19 -- common/autotest_common.sh@941 -- # uname 00:19:46.592 12:47:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:46.592 12:47:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 539725 00:19:46.592 12:47:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:46.592 12:47:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:46.592 12:47:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 539725' 00:19:46.592 killing process with pid 539725 00:19:46.592 12:47:19 -- common/autotest_common.sh@955 -- # kill 539725 00:19:46.592 12:47:19 -- common/autotest_common.sh@960 -- # wait 539725 00:19:46.853 12:47:19 -- nvmf/common.sh@480 -- # '[' '' == 
iso ']' 00:19:46.853 12:47:19 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:46.853 00:19:46.853 real 0m8.475s 00:19:46.853 user 0m3.400s 00:19:46.853 sys 0m5.720s 00:19:46.853 12:47:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:46.853 12:47:19 -- common/autotest_common.sh@10 -- # set +x 00:19:46.853 ************************************ 00:19:46.853 END TEST nvmf_zcopy 00:19:46.853 ************************************ 00:19:46.853 12:47:19 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:46.853 12:47:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:46.853 12:47:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:46.853 12:47:19 -- common/autotest_common.sh@10 -- # set +x 00:19:46.853 ************************************ 00:19:46.853 START TEST nvmf_nmic 00:19:46.853 ************************************ 00:19:46.853 12:47:19 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:46.853 * Looking for test storage... 00:19:46.853 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:46.853 12:47:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:46.853 12:47:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:46.853 12:47:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:47.115 12:47:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:47.115 12:47:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:47.115 12:47:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:47.115 12:47:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:47.115 12:47:20 -- scripts/common.sh@335 -- # IFS=.-: 00:19:47.115 12:47:20 -- scripts/common.sh@335 -- # read -ra ver1 00:19:47.115 12:47:20 -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.115 12:47:20 -- scripts/common.sh@336 -- # read -ra ver2 00:19:47.115 12:47:20 -- scripts/common.sh@337 -- # local 'op=<' 00:19:47.115 12:47:20 -- scripts/common.sh@339 -- # ver1_l=2 00:19:47.115 12:47:20 -- scripts/common.sh@340 -- # ver2_l=1 00:19:47.115 12:47:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:47.115 12:47:20 -- scripts/common.sh@343 -- # case "$op" in 00:19:47.115 12:47:20 -- scripts/common.sh@344 -- # : 1 00:19:47.115 12:47:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:47.115 12:47:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.115 12:47:20 -- scripts/common.sh@364 -- # decimal 1 00:19:47.115 12:47:20 -- scripts/common.sh@352 -- # local d=1 00:19:47.115 12:47:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.115 12:47:20 -- scripts/common.sh@354 -- # echo 1 00:19:47.115 12:47:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:47.115 12:47:20 -- scripts/common.sh@365 -- # decimal 2 00:19:47.115 12:47:20 -- scripts/common.sh@352 -- # local d=2 00:19:47.115 12:47:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.115 12:47:20 -- scripts/common.sh@354 -- # echo 2 00:19:47.115 12:47:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:47.115 12:47:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:47.115 12:47:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:47.115 12:47:20 -- scripts/common.sh@367 -- # return 0 00:19:47.115 12:47:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.115 12:47:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:47.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.115 --rc genhtml_branch_coverage=1 00:19:47.115 --rc genhtml_function_coverage=1 00:19:47.115 --rc genhtml_legend=1 00:19:47.115 --rc geninfo_all_blocks=1 00:19:47.115 --rc geninfo_unexecuted_blocks=1 00:19:47.115 00:19:47.115 ' 00:19:47.115 12:47:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:47.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.115 --rc genhtml_branch_coverage=1 00:19:47.115 --rc genhtml_function_coverage=1 00:19:47.115 --rc genhtml_legend=1 00:19:47.115 --rc geninfo_all_blocks=1 00:19:47.115 --rc geninfo_unexecuted_blocks=1 00:19:47.115 00:19:47.115 ' 00:19:47.115 12:47:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:47.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.115 --rc genhtml_branch_coverage=1 00:19:47.115 --rc genhtml_function_coverage=1 00:19:47.115 --rc genhtml_legend=1 00:19:47.115 --rc geninfo_all_blocks=1 00:19:47.115 --rc geninfo_unexecuted_blocks=1 00:19:47.115 00:19:47.115 ' 00:19:47.115 12:47:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:47.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.115 --rc genhtml_branch_coverage=1 00:19:47.115 --rc genhtml_function_coverage=1 00:19:47.115 --rc genhtml_legend=1 00:19:47.115 --rc geninfo_all_blocks=1 00:19:47.115 --rc geninfo_unexecuted_blocks=1 00:19:47.115 00:19:47.115 ' 00:19:47.115 12:47:20 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.115 12:47:20 -- nvmf/common.sh@7 -- # uname -s 00:19:47.115 12:47:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.115 12:47:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.115 12:47:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.115 12:47:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.115 12:47:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.115 12:47:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.115 12:47:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.115 12:47:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.115 12:47:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.115 12:47:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.115 12:47:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 
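nvmf/common.sh is being sourced again for the nmic test: nvme gen-hostnqn supplies NVME_HOSTNQN, the trailing UUID becomes NVME_HOSTID (next line), and the pair is passed to every nvme connect later in the run. A rough sketch of producing and using those values; the exact extraction used by common.sh may differ:

hostnqn=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
hostid=${hostnqn##*:}         # keep only the UUID after the last colon
nvme connect -i 15 --hostnqn="$hostnqn" --hostid="$hostid" -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420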
00:19:47.115 12:47:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:47.115 12:47:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.115 12:47:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.115 12:47:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.115 12:47:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:47.115 12:47:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.115 12:47:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.115 12:47:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.115 12:47:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.115 12:47:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.115 12:47:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.115 12:47:20 -- paths/export.sh@5 -- # export PATH 00:19:47.115 12:47:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.115 12:47:20 -- nvmf/common.sh@46 -- # : 0 00:19:47.115 12:47:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:47.116 12:47:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:47.116 12:47:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:47.116 12:47:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.116 12:47:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.116 12:47:20 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:47.116 12:47:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:47.116 12:47:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:47.116 12:47:20 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:47.116 12:47:20 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:47.116 12:47:20 -- target/nmic.sh@14 -- # nvmftestinit 00:19:47.116 12:47:20 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:47.116 12:47:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.116 12:47:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:47.116 12:47:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:47.116 12:47:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:47.116 12:47:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.116 12:47:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.116 12:47:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.116 12:47:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:47.116 12:47:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:47.116 12:47:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:47.116 12:47:20 -- common/autotest_common.sh@10 -- # set +x 00:19:55.253 12:47:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:55.253 12:47:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:55.253 12:47:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:55.253 12:47:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:55.253 12:47:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:55.253 12:47:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:55.253 12:47:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:55.253 12:47:26 -- nvmf/common.sh@294 -- # net_devs=() 00:19:55.253 12:47:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:55.253 12:47:26 -- nvmf/common.sh@295 -- # e810=() 00:19:55.253 12:47:26 -- nvmf/common.sh@295 -- # local -ga e810 00:19:55.253 12:47:26 -- nvmf/common.sh@296 -- # x722=() 00:19:55.253 12:47:26 -- nvmf/common.sh@296 -- # local -ga x722 00:19:55.253 12:47:26 -- nvmf/common.sh@297 -- # mlx=() 00:19:55.253 12:47:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:55.253 12:47:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.253 12:47:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.253 12:47:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.253 12:47:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.253 12:47:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.253 12:47:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.253 12:47:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.253 12:47:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.253 12:47:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.253 12:47:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.253 12:47:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.253 12:47:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:55.253 12:47:26 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:55.253 12:47:26 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:55.253 12:47:26 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:55.253 12:47:26 
-- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:55.253 12:47:26 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:55.253 12:47:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:55.253 12:47:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:55.253 12:47:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:19:55.253 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:19:55.253 12:47:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:55.253 12:47:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:55.253 12:47:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:55.253 12:47:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:55.253 12:47:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:55.253 12:47:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:55.253 12:47:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:55.253 12:47:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:19:55.253 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:19:55.253 12:47:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:55.253 12:47:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:55.253 12:47:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:55.253 12:47:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:55.253 12:47:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:55.253 12:47:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:55.253 12:47:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:55.253 12:47:26 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:55.253 12:47:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:55.253 12:47:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.253 12:47:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:55.253 12:47:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.253 12:47:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:19:55.253 Found net devices under 0000:98:00.0: mlx_0_0 00:19:55.253 12:47:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.253 12:47:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:55.253 12:47:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.253 12:47:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:55.253 12:47:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.253 12:47:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:19:55.253 Found net devices under 0000:98:00.1: mlx_0_1 00:19:55.253 12:47:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.253 12:47:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:55.253 12:47:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:55.253 12:47:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:55.253 12:47:26 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:55.253 12:47:26 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:55.253 12:47:26 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:55.253 12:47:26 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:55.253 12:47:26 -- nvmf/common.sh@57 -- # uname 00:19:55.253 12:47:26 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:55.253 12:47:26 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:55.253 12:47:26 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:55.253 12:47:26 -- 
nvmf/common.sh@63 -- # modprobe ib_umad 00:19:55.253 12:47:26 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:55.253 12:47:26 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:55.253 12:47:26 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:55.253 12:47:26 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:55.253 12:47:26 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:55.253 12:47:26 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:55.253 12:47:26 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:55.253 12:47:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:55.253 12:47:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:55.253 12:47:27 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:55.253 12:47:27 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:55.253 12:47:27 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:55.253 12:47:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:55.253 12:47:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.253 12:47:27 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:55.253 12:47:27 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:55.253 12:47:27 -- nvmf/common.sh@104 -- # continue 2 00:19:55.253 12:47:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:55.253 12:47:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.253 12:47:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:55.253 12:47:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.253 12:47:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:55.254 12:47:27 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:55.254 12:47:27 -- nvmf/common.sh@104 -- # continue 2 00:19:55.254 12:47:27 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:55.254 12:47:27 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:55.254 12:47:27 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:55.254 12:47:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:55.254 12:47:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:55.254 12:47:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:55.254 12:47:27 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:55.254 12:47:27 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:55.254 12:47:27 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:55.254 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:55.254 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:19:55.254 altname enp152s0f0np0 00:19:55.254 altname ens817f0np0 00:19:55.254 inet 192.168.100.8/24 scope global mlx_0_0 00:19:55.254 valid_lft forever preferred_lft forever 00:19:55.254 12:47:27 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:55.254 12:47:27 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:55.254 12:47:27 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:55.254 12:47:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:55.254 12:47:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:55.254 12:47:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:55.254 12:47:27 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:55.254 12:47:27 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:55.254 12:47:27 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:55.254 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:55.254 link/ether ec:0d:9a:8b:2b:a5 brd 
ff:ff:ff:ff:ff:ff 00:19:55.254 altname enp152s0f1np1 00:19:55.254 altname ens817f1np1 00:19:55.254 inet 192.168.100.9/24 scope global mlx_0_1 00:19:55.254 valid_lft forever preferred_lft forever 00:19:55.254 12:47:27 -- nvmf/common.sh@410 -- # return 0 00:19:55.254 12:47:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:55.254 12:47:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:55.254 12:47:27 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:55.254 12:47:27 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:55.254 12:47:27 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:55.254 12:47:27 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:55.254 12:47:27 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:55.254 12:47:27 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:55.254 12:47:27 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:55.254 12:47:27 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:55.254 12:47:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:55.254 12:47:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.254 12:47:27 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:55.254 12:47:27 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:55.254 12:47:27 -- nvmf/common.sh@104 -- # continue 2 00:19:55.254 12:47:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:55.254 12:47:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.254 12:47:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:55.254 12:47:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.254 12:47:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:55.254 12:47:27 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:55.254 12:47:27 -- nvmf/common.sh@104 -- # continue 2 00:19:55.254 12:47:27 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:55.254 12:47:27 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:55.254 12:47:27 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:55.254 12:47:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:55.254 12:47:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:55.254 12:47:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:55.254 12:47:27 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:55.254 12:47:27 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:55.254 12:47:27 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:55.254 12:47:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:55.254 12:47:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:55.254 12:47:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:55.254 12:47:27 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:55.254 192.168.100.9' 00:19:55.254 12:47:27 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:55.254 192.168.100.9' 00:19:55.254 12:47:27 -- nvmf/common.sh@445 -- # head -n 1 00:19:55.254 12:47:27 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:55.254 12:47:27 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:55.254 192.168.100.9' 00:19:55.254 12:47:27 -- nvmf/common.sh@446 -- # tail -n +2 00:19:55.254 12:47:27 -- nvmf/common.sh@446 -- # head -n 1 00:19:55.254 12:47:27 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:55.254 12:47:27 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:55.254 12:47:27 -- nvmf/common.sh@451 -- # 
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:55.254 12:47:27 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:55.254 12:47:27 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:55.254 12:47:27 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:55.254 12:47:27 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:55.254 12:47:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:55.254 12:47:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:55.254 12:47:27 -- common/autotest_common.sh@10 -- # set +x 00:19:55.254 12:47:27 -- nvmf/common.sh@469 -- # nvmfpid=543624 00:19:55.254 12:47:27 -- nvmf/common.sh@470 -- # waitforlisten 543624 00:19:55.254 12:47:27 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:55.254 12:47:27 -- common/autotest_common.sh@829 -- # '[' -z 543624 ']' 00:19:55.254 12:47:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.254 12:47:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.254 12:47:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.254 12:47:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.254 12:47:27 -- common/autotest_common.sh@10 -- # set +x 00:19:55.254 [2024-11-20 12:47:27.236783] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:55.254 [2024-11-20 12:47:27.236853] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.254 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.254 [2024-11-20 12:47:27.305063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:55.254 [2024-11-20 12:47:27.378976] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:55.254 [2024-11-20 12:47:27.379126] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.254 [2024-11-20 12:47:27.379137] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.254 [2024-11-20 12:47:27.379145] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
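The nmic target is started with -m 0xF rather than the 0x2 used for the zcopy test, so the hex core mask now selects four reactor cores, matching the four "Reactor started on core N" notices that follow. A quick way to see which cores a given mask selects (illustration only, not part of the test scripts):

mask=0xF
for core in $(seq 0 31); do
  (( (mask >> core) & 1 )) && echo "reactor on core $core"
done
# 0xF selects cores 0-3; 0x2 selects core 1 only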
00:19:55.254 [2024-11-20 12:47:27.379289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.254 [2024-11-20 12:47:27.379410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.254 [2024-11-20 12:47:27.379566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.254 [2024-11-20 12:47:27.379568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.254 12:47:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.254 12:47:28 -- common/autotest_common.sh@862 -- # return 0 00:19:55.254 12:47:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:55.254 12:47:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:55.254 12:47:28 -- common/autotest_common.sh@10 -- # set +x 00:19:55.254 12:47:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.254 12:47:28 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:55.254 12:47:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.254 12:47:28 -- common/autotest_common.sh@10 -- # set +x 00:19:55.254 [2024-11-20 12:47:28.105624] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x77a7f0/0x77ece0) succeed. 00:19:55.254 [2024-11-20 12:47:28.118797] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x77bde0/0x7c0380) succeed. 00:19:55.254 12:47:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.254 12:47:28 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:55.254 12:47:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.254 12:47:28 -- common/autotest_common.sh@10 -- # set +x 00:19:55.254 Malloc0 00:19:55.254 12:47:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.254 12:47:28 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:55.254 12:47:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.254 12:47:28 -- common/autotest_common.sh@10 -- # set +x 00:19:55.254 12:47:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.254 12:47:28 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:55.254 12:47:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.254 12:47:28 -- common/autotest_common.sh@10 -- # set +x 00:19:55.254 12:47:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.254 12:47:28 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:55.254 12:47:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.254 12:47:28 -- common/autotest_common.sh@10 -- # set +x 00:19:55.254 [2024-11-20 12:47:28.291118] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:55.254 12:47:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.254 12:47:28 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:55.254 test case1: single bdev can't be used in multiple subsystems 00:19:55.254 12:47:28 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:55.254 12:47:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.254 12:47:28 -- common/autotest_common.sh@10 -- # set +x 00:19:55.255 12:47:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.255 
12:47:28 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:19:55.255 12:47:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.255 12:47:28 -- common/autotest_common.sh@10 -- # set +x 00:19:55.255 12:47:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.255 12:47:28 -- target/nmic.sh@28 -- # nmic_status=0 00:19:55.255 12:47:28 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:55.255 12:47:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.255 12:47:28 -- common/autotest_common.sh@10 -- # set +x 00:19:55.255 [2024-11-20 12:47:28.326899] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:55.255 [2024-11-20 12:47:28.326917] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:55.255 [2024-11-20 12:47:28.326925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:55.255 request: 00:19:55.255 { 00:19:55.255 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:55.255 "namespace": { 00:19:55.255 "bdev_name": "Malloc0" 00:19:55.255 }, 00:19:55.255 "method": "nvmf_subsystem_add_ns", 00:19:55.255 "req_id": 1 00:19:55.255 } 00:19:55.255 Got JSON-RPC error response 00:19:55.255 response: 00:19:55.255 { 00:19:55.255 "code": -32602, 00:19:55.255 "message": "Invalid parameters" 00:19:55.255 } 00:19:55.255 12:47:28 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:55.255 12:47:28 -- target/nmic.sh@29 -- # nmic_status=1 00:19:55.255 12:47:28 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:55.255 12:47:28 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:55.255 Adding namespace failed - expected result. 
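Test case 1 above is the expected-failure path: Malloc0 is already claimed by nqn.2016-06.io.spdk:cnode1, so attaching it to cnode2 is rejected with the "Invalid parameters" JSON-RPC error. A condensed sketch of the same sequence issued with scripts/rpc.py from the SPDK checkout (the harness drives these calls through its rpc_cmd wrapper, but the arguments are the same):

scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: bdev already claimed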
00:19:55.255 12:47:28 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:55.255 test case2: host connect to nvmf target in multiple paths 00:19:55.255 12:47:28 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:55.255 12:47:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.255 12:47:28 -- common/autotest_common.sh@10 -- # set +x 00:19:55.255 [2024-11-20 12:47:28.338964] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:55.255 12:47:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.255 12:47:28 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:57.171 12:47:29 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:19:58.556 12:47:31 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:58.556 12:47:31 -- common/autotest_common.sh@1187 -- # local i=0 00:19:58.556 12:47:31 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:58.556 12:47:31 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:58.556 12:47:31 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:00.468 12:47:33 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:00.468 12:47:33 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:00.468 12:47:33 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:20:00.468 12:47:33 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:00.468 12:47:33 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:00.468 12:47:33 -- common/autotest_common.sh@1197 -- # return 0 00:20:00.468 12:47:33 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:00.468 [global] 00:20:00.468 thread=1 00:20:00.468 invalidate=1 00:20:00.468 rw=write 00:20:00.468 time_based=1 00:20:00.468 runtime=1 00:20:00.468 ioengine=libaio 00:20:00.468 direct=1 00:20:00.468 bs=4096 00:20:00.468 iodepth=1 00:20:00.468 norandommap=0 00:20:00.468 numjobs=1 00:20:00.468 00:20:00.468 verify_dump=1 00:20:00.468 verify_backlog=512 00:20:00.468 verify_state_save=0 00:20:00.468 do_verify=1 00:20:00.468 verify=crc32c-intel 00:20:00.468 [job0] 00:20:00.468 filename=/dev/nvme0n1 00:20:00.468 Could not set queue depth (nvme0n1) 00:20:01.065 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:01.065 fio-3.35 00:20:01.065 Starting 1 thread 00:20:02.007 00:20:02.007 job0: (groupid=0, jobs=1): err= 0: pid=545088: Wed Nov 20 12:47:35 2024 00:20:02.007 read: IOPS=7822, BW=30.6MiB/s (32.0MB/s)(30.6MiB/1000msec) 00:20:02.007 slat (nsec): min=5720, max=27565, avg=6225.87, stdev=673.31 00:20:02.007 clat (usec): min=27, max=120, avg=52.55, stdev= 3.66 00:20:02.007 lat (usec): min=50, max=127, avg=58.78, stdev= 3.68 00:20:02.007 clat percentiles (usec): 00:20:02.007 | 1.00th=[ 47], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 50], 00:20:02.007 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 52], 60.00th=[ 53], 00:20:02.007 | 
70.00th=[ 55], 80.00th=[ 56], 90.00th=[ 58], 95.00th=[ 60], 00:20:02.007 | 99.00th=[ 62], 99.50th=[ 64], 99.90th=[ 68], 99.95th=[ 72], 00:20:02.007 | 99.99th=[ 121] 00:20:02.007 write: IOPS=8192, BW=32.0MiB/s (33.6MB/s)(32.0MiB/1000msec); 0 zone resets 00:20:02.007 slat (nsec): min=7888, max=53519, avg=8784.30, stdev=2609.36 00:20:02.007 clat (usec): min=32, max=358, avg=53.02, stdev=20.37 00:20:02.007 lat (usec): min=50, max=412, avg=61.80, stdev=22.42 00:20:02.007 clat percentiles (usec): 00:20:02.007 | 1.00th=[ 45], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 48], 00:20:02.007 | 30.00th=[ 49], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 52], 00:20:02.007 | 70.00th=[ 53], 80.00th=[ 55], 90.00th=[ 57], 95.00th=[ 59], 00:20:02.007 | 99.00th=[ 165], 99.50th=[ 243], 99.90th=[ 306], 99.95th=[ 318], 00:20:02.007 | 99.99th=[ 359] 00:20:02.007 bw ( KiB/s): min=32768, max=32768, per=100.00%, avg=32768.00, stdev= 0.00, samples=1 00:20:02.007 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=1 00:20:02.007 lat (usec) : 50=36.01%, 100=63.39%, 250=0.39%, 500=0.21% 00:20:02.007 cpu : usr=9.60%, sys=16.70%, ctx=16014, majf=0, minf=1 00:20:02.007 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:02.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.007 issued rwts: total=7822,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.007 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:02.007 00:20:02.007 Run status group 0 (all jobs): 00:20:02.007 READ: bw=30.6MiB/s (32.0MB/s), 30.6MiB/s-30.6MiB/s (32.0MB/s-32.0MB/s), io=30.6MiB (32.0MB), run=1000-1000msec 00:20:02.007 WRITE: bw=32.0MiB/s (33.6MB/s), 32.0MiB/s-32.0MiB/s (33.6MB/s-33.6MB/s), io=32.0MiB (33.6MB), run=1000-1000msec 00:20:02.007 00:20:02.007 Disk stats (read/write): 00:20:02.007 nvme0n1: ios=7218/7190, merge=0/0, ticks=318/304, in_queue=622, util=90.68% 00:20:02.007 12:47:35 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:05.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:05.310 12:47:37 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:05.310 12:47:37 -- common/autotest_common.sh@1208 -- # local i=0 00:20:05.310 12:47:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:05.310 12:47:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:05.310 12:47:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:05.310 12:47:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:05.310 12:47:37 -- common/autotest_common.sh@1220 -- # return 0 00:20:05.310 12:47:37 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:05.310 12:47:37 -- target/nmic.sh@53 -- # nvmftestfini 00:20:05.310 12:47:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:05.310 12:47:37 -- nvmf/common.sh@116 -- # sync 00:20:05.310 12:47:37 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:05.310 12:47:37 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:05.310 12:47:37 -- nvmf/common.sh@119 -- # set +e 00:20:05.310 12:47:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:05.310 12:47:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:05.310 rmmod nvme_rdma 00:20:05.310 rmmod nvme_fabrics 00:20:05.310 12:47:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:05.310 12:47:37 -- nvmf/common.sh@123 -- # set -e 00:20:05.310 12:47:37 -- 
nvmf/common.sh@124 -- # return 0 00:20:05.310 12:47:37 -- nvmf/common.sh@477 -- # '[' -n 543624 ']' 00:20:05.310 12:47:37 -- nvmf/common.sh@478 -- # killprocess 543624 00:20:05.310 12:47:37 -- common/autotest_common.sh@936 -- # '[' -z 543624 ']' 00:20:05.310 12:47:37 -- common/autotest_common.sh@940 -- # kill -0 543624 00:20:05.310 12:47:37 -- common/autotest_common.sh@941 -- # uname 00:20:05.310 12:47:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:05.310 12:47:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 543624 00:20:05.310 12:47:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:05.310 12:47:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:05.310 12:47:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 543624' 00:20:05.310 killing process with pid 543624 00:20:05.310 12:47:37 -- common/autotest_common.sh@955 -- # kill 543624 00:20:05.310 12:47:37 -- common/autotest_common.sh@960 -- # wait 543624 00:20:05.310 12:47:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:05.310 12:47:38 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:05.310 00:20:05.310 real 0m18.227s 00:20:05.310 user 0m59.426s 00:20:05.310 sys 0m6.560s 00:20:05.310 12:47:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:05.310 12:47:38 -- common/autotest_common.sh@10 -- # set +x 00:20:05.310 ************************************ 00:20:05.310 END TEST nvmf_nmic 00:20:05.310 ************************************ 00:20:05.310 12:47:38 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:20:05.310 12:47:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:05.310 12:47:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:05.310 12:47:38 -- common/autotest_common.sh@10 -- # set +x 00:20:05.310 ************************************ 00:20:05.310 START TEST nvmf_fio_target 00:20:05.310 ************************************ 00:20:05.310 12:47:38 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:20:05.310 * Looking for test storage... 
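The nmic teardown just above (before the nvmf_fio_target banner) is the standard sequence: the initiator disconnects, nvmftestfini unloads the fabrics modules, and killprocess stops the target started earlier (pid 543624 here). Roughly the same steps by hand, assuming $nvmfpid holds the target's pid:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess also checks the process name before killing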
00:20:05.310 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:05.310 12:47:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:05.310 12:47:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:05.310 12:47:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:05.310 12:47:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:05.310 12:47:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:05.310 12:47:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:05.310 12:47:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:05.310 12:47:38 -- scripts/common.sh@335 -- # IFS=.-: 00:20:05.310 12:47:38 -- scripts/common.sh@335 -- # read -ra ver1 00:20:05.310 12:47:38 -- scripts/common.sh@336 -- # IFS=.-: 00:20:05.310 12:47:38 -- scripts/common.sh@336 -- # read -ra ver2 00:20:05.310 12:47:38 -- scripts/common.sh@337 -- # local 'op=<' 00:20:05.310 12:47:38 -- scripts/common.sh@339 -- # ver1_l=2 00:20:05.310 12:47:38 -- scripts/common.sh@340 -- # ver2_l=1 00:20:05.310 12:47:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:05.310 12:47:38 -- scripts/common.sh@343 -- # case "$op" in 00:20:05.310 12:47:38 -- scripts/common.sh@344 -- # : 1 00:20:05.310 12:47:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:05.310 12:47:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:05.310 12:47:38 -- scripts/common.sh@364 -- # decimal 1 00:20:05.310 12:47:38 -- scripts/common.sh@352 -- # local d=1 00:20:05.310 12:47:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:05.310 12:47:38 -- scripts/common.sh@354 -- # echo 1 00:20:05.310 12:47:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:05.310 12:47:38 -- scripts/common.sh@365 -- # decimal 2 00:20:05.310 12:47:38 -- scripts/common.sh@352 -- # local d=2 00:20:05.310 12:47:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:05.310 12:47:38 -- scripts/common.sh@354 -- # echo 2 00:20:05.310 12:47:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:05.310 12:47:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:05.310 12:47:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:05.310 12:47:38 -- scripts/common.sh@367 -- # return 0 00:20:05.310 12:47:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:05.310 12:47:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:05.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.310 --rc genhtml_branch_coverage=1 00:20:05.310 --rc genhtml_function_coverage=1 00:20:05.310 --rc genhtml_legend=1 00:20:05.310 --rc geninfo_all_blocks=1 00:20:05.310 --rc geninfo_unexecuted_blocks=1 00:20:05.310 00:20:05.310 ' 00:20:05.310 12:47:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:05.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.310 --rc genhtml_branch_coverage=1 00:20:05.310 --rc genhtml_function_coverage=1 00:20:05.310 --rc genhtml_legend=1 00:20:05.310 --rc geninfo_all_blocks=1 00:20:05.310 --rc geninfo_unexecuted_blocks=1 00:20:05.310 00:20:05.310 ' 00:20:05.310 12:47:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:05.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.310 --rc genhtml_branch_coverage=1 00:20:05.310 --rc genhtml_function_coverage=1 00:20:05.310 --rc genhtml_legend=1 00:20:05.310 --rc geninfo_all_blocks=1 00:20:05.310 --rc geninfo_unexecuted_blocks=1 00:20:05.310 00:20:05.310 ' 
00:20:05.310 12:47:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:05.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.310 --rc genhtml_branch_coverage=1 00:20:05.310 --rc genhtml_function_coverage=1 00:20:05.310 --rc genhtml_legend=1 00:20:05.310 --rc geninfo_all_blocks=1 00:20:05.310 --rc geninfo_unexecuted_blocks=1 00:20:05.310 00:20:05.310 ' 00:20:05.310 12:47:38 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:05.310 12:47:38 -- nvmf/common.sh@7 -- # uname -s 00:20:05.310 12:47:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:05.310 12:47:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.310 12:47:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.310 12:47:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.310 12:47:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.310 12:47:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.310 12:47:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.310 12:47:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.310 12:47:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.311 12:47:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.311 12:47:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:05.311 12:47:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:05.311 12:47:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.311 12:47:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.311 12:47:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:05.311 12:47:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:05.311 12:47:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.311 12:47:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.311 12:47:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.311 12:47:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.311 12:47:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.311 12:47:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.311 12:47:38 -- paths/export.sh@5 -- # export PATH 00:20:05.311 12:47:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.311 12:47:38 -- nvmf/common.sh@46 -- # : 0 00:20:05.311 12:47:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:05.311 12:47:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:05.311 12:47:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:05.311 12:47:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:05.311 12:47:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.311 12:47:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:05.311 12:47:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:05.311 12:47:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:05.311 12:47:38 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:05.311 12:47:38 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:05.311 12:47:38 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:05.311 12:47:38 -- target/fio.sh@16 -- # nvmftestinit 00:20:05.311 12:47:38 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:05.311 12:47:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.311 12:47:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:05.311 12:47:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:05.311 12:47:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:05.311 12:47:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.311 12:47:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.311 12:47:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.311 12:47:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:05.311 12:47:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:05.311 12:47:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:05.311 12:47:38 -- common/autotest_common.sh@10 -- # set +x 00:20:13.460 12:47:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:13.460 12:47:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:13.460 12:47:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:13.460 12:47:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:13.460 12:47:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:13.460 12:47:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:13.460 12:47:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:13.460 12:47:45 -- nvmf/common.sh@294 -- # net_devs=() 
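The device scan traced below matches the supported Intel (e810/x722) and Mellanox PCI IDs out of a pre-built pci_bus_cache and then resolves each matching function to its kernel net device through sysfs. A rough standalone equivalent of that lookup, assuming lspci is available (vendor 0x15b3 is the Mellanox case seen in this run):

  # Sketch: list Mellanox functions and the netdev name(s) behind each one via sysfs
  for pci in $(lspci -Dmmn -d 15b3: | awk '{print $1}'); do
      for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdev" ] || continue            # skip functions with no bound netdev
          echo "Found net devices under $pci: $(basename "$netdev")"
      done
  done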
00:20:13.460 12:47:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:13.460 12:47:45 -- nvmf/common.sh@295 -- # e810=() 00:20:13.460 12:47:45 -- nvmf/common.sh@295 -- # local -ga e810 00:20:13.460 12:47:45 -- nvmf/common.sh@296 -- # x722=() 00:20:13.460 12:47:45 -- nvmf/common.sh@296 -- # local -ga x722 00:20:13.460 12:47:45 -- nvmf/common.sh@297 -- # mlx=() 00:20:13.460 12:47:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:13.460 12:47:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.460 12:47:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.460 12:47:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.460 12:47:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.460 12:47:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.460 12:47:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.460 12:47:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.460 12:47:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.460 12:47:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.460 12:47:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.460 12:47:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.460 12:47:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:13.460 12:47:45 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:13.460 12:47:45 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:13.460 12:47:45 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:13.460 12:47:45 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:13.460 12:47:45 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:13.460 12:47:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:13.460 12:47:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:13.460 12:47:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:20:13.460 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:20:13.460 12:47:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:13.461 12:47:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:13.461 12:47:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:20:13.461 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:20:13.461 12:47:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:13.461 12:47:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:13.461 12:47:45 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:13.461 12:47:45 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.461 12:47:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:13.461 12:47:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.461 12:47:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:20:13.461 Found net devices under 0000:98:00.0: mlx_0_0 00:20:13.461 12:47:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.461 12:47:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:13.461 12:47:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.461 12:47:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:13.461 12:47:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.461 12:47:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:20:13.461 Found net devices under 0000:98:00.1: mlx_0_1 00:20:13.461 12:47:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.461 12:47:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:13.461 12:47:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:13.461 12:47:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:13.461 12:47:45 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:13.461 12:47:45 -- nvmf/common.sh@57 -- # uname 00:20:13.461 12:47:45 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:13.461 12:47:45 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:13.461 12:47:45 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:13.461 12:47:45 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:13.461 12:47:45 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:13.461 12:47:45 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:13.461 12:47:45 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:13.461 12:47:45 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:13.461 12:47:45 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:13.461 12:47:45 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:13.461 12:47:45 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:13.461 12:47:45 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:13.461 12:47:45 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:13.461 12:47:45 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:13.461 12:47:45 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:13.461 12:47:45 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:13.461 12:47:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:13.461 12:47:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.461 12:47:45 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:13.461 12:47:45 -- nvmf/common.sh@104 -- # continue 2 00:20:13.461 12:47:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:13.461 12:47:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.461 12:47:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.461 12:47:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:13.461 12:47:45 -- 
nvmf/common.sh@104 -- # continue 2 00:20:13.461 12:47:45 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:13.461 12:47:45 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:13.461 12:47:45 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:13.461 12:47:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:13.461 12:47:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:13.461 12:47:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:13.461 12:47:45 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:13.461 12:47:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:13.461 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:13.461 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:20:13.461 altname enp152s0f0np0 00:20:13.461 altname ens817f0np0 00:20:13.461 inet 192.168.100.8/24 scope global mlx_0_0 00:20:13.461 valid_lft forever preferred_lft forever 00:20:13.461 12:47:45 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:13.461 12:47:45 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:13.461 12:47:45 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:13.461 12:47:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:13.461 12:47:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:13.461 12:47:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:13.461 12:47:45 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:13.461 12:47:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:13.461 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:13.461 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:20:13.461 altname enp152s0f1np1 00:20:13.461 altname ens817f1np1 00:20:13.461 inet 192.168.100.9/24 scope global mlx_0_1 00:20:13.461 valid_lft forever preferred_lft forever 00:20:13.461 12:47:45 -- nvmf/common.sh@410 -- # return 0 00:20:13.461 12:47:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:13.461 12:47:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:13.461 12:47:45 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:13.461 12:47:45 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:13.461 12:47:45 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:13.461 12:47:45 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:13.461 12:47:45 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:13.461 12:47:45 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:13.461 12:47:45 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:13.461 12:47:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:13.461 12:47:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.461 12:47:45 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:13.461 12:47:45 -- nvmf/common.sh@104 -- # continue 2 00:20:13.461 12:47:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:13.461 12:47:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.461 12:47:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:13.461 12:47:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.461 12:47:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
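For reference, the per-interface address lookup traced above reduces to a single pipeline; the fourth column of ip -o -4 output is addr/prefix, and the cut strips the prefix length:

  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8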
00:20:13.461 12:47:45 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:13.461 12:47:45 -- nvmf/common.sh@104 -- # continue 2 00:20:13.461 12:47:45 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:13.461 12:47:45 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:13.461 12:47:45 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:13.462 12:47:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:13.462 12:47:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:13.462 12:47:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:13.462 12:47:45 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:13.462 12:47:45 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:13.462 12:47:45 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:13.462 12:47:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:13.462 12:47:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:13.462 12:47:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:13.462 12:47:45 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:13.462 192.168.100.9' 00:20:13.462 12:47:45 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:13.462 192.168.100.9' 00:20:13.462 12:47:45 -- nvmf/common.sh@445 -- # head -n 1 00:20:13.462 12:47:45 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:13.462 12:47:45 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:13.462 192.168.100.9' 00:20:13.462 12:47:45 -- nvmf/common.sh@446 -- # tail -n +2 00:20:13.462 12:47:45 -- nvmf/common.sh@446 -- # head -n 1 00:20:13.462 12:47:45 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:13.462 12:47:45 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:13.462 12:47:45 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:13.462 12:47:45 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:13.462 12:47:45 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:13.462 12:47:45 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:13.462 12:47:45 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:13.462 12:47:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:13.462 12:47:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:13.462 12:47:45 -- common/autotest_common.sh@10 -- # set +x 00:20:13.462 12:47:45 -- nvmf/common.sh@469 -- # nvmfpid=549785 00:20:13.462 12:47:45 -- nvmf/common.sh@470 -- # waitforlisten 549785 00:20:13.462 12:47:45 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:13.462 12:47:45 -- common/autotest_common.sh@829 -- # '[' -z 549785 ']' 00:20:13.462 12:47:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.462 12:47:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.462 12:47:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.462 12:47:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.462 12:47:45 -- common/autotest_common.sh@10 -- # set +x 00:20:13.462 [2024-11-20 12:47:45.457543] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
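The two listener addresses used for the rest of the run are peeled off the newline-separated RDMA_IP_LIST exactly as traced above; condensed:

  RDMA_IP_LIST='192.168.100.8
  192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9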
00:20:13.462 [2024-11-20 12:47:45.457612] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.462 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.462 [2024-11-20 12:47:45.523219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:13.462 [2024-11-20 12:47:45.594753] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:13.462 [2024-11-20 12:47:45.594886] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.462 [2024-11-20 12:47:45.594896] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.462 [2024-11-20 12:47:45.594905] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.462 [2024-11-20 12:47:45.595016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.462 [2024-11-20 12:47:45.595113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.462 [2024-11-20 12:47:45.595248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.462 [2024-11-20 12:47:45.595250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:13.462 12:47:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.462 12:47:46 -- common/autotest_common.sh@862 -- # return 0 00:20:13.462 12:47:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:13.462 12:47:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:13.462 12:47:46 -- common/autotest_common.sh@10 -- # set +x 00:20:13.462 12:47:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.462 12:47:46 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:13.462 [2024-11-20 12:47:46.466889] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10887f0/0x108cce0) succeed. 00:20:13.462 [2024-11-20 12:47:46.481584] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1089de0/0x10ce380) succeed. 
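The target provisioning that follows in the trace is a plain sequence of rpc.py calls against the running nvmf_tgt, plus the host-side connect. Condensed here with the long script path shortened to rpc.py and arguments exactly as traced below:

  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py bdev_malloc_create 64 512                                     # repeated for Malloc0..Malloc6
  rpc.py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # likewise Malloc1, raid0, concat0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
      --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6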
00:20:13.723 12:47:46 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:13.723 12:47:46 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:13.723 12:47:46 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:13.984 12:47:46 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:13.984 12:47:46 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:14.245 12:47:47 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:14.245 12:47:47 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:14.506 12:47:47 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:14.506 12:47:47 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:14.506 12:47:47 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:14.767 12:47:47 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:14.767 12:47:47 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:15.028 12:47:47 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:15.028 12:47:47 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:15.028 12:47:48 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:15.028 12:47:48 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:15.287 12:47:48 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:15.548 12:47:48 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:15.548 12:47:48 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:15.548 12:47:48 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:15.548 12:47:48 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:15.809 12:47:48 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:16.070 [2024-11-20 12:47:48.933812] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:16.070 12:47:48 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:16.070 12:47:49 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:16.330 12:47:49 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:17.714 12:47:50 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:17.714 12:47:50 -- common/autotest_common.sh@1187 -- # local 
i=0 00:20:17.714 12:47:50 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:17.714 12:47:50 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:20:17.714 12:47:50 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:20:17.714 12:47:50 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:19.627 12:47:52 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:19.627 12:47:52 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:19.627 12:47:52 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:20:19.627 12:47:52 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:20:19.627 12:47:52 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:19.627 12:47:52 -- common/autotest_common.sh@1197 -- # return 0 00:20:19.627 12:47:52 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:19.887 [global] 00:20:19.887 thread=1 00:20:19.887 invalidate=1 00:20:19.887 rw=write 00:20:19.887 time_based=1 00:20:19.887 runtime=1 00:20:19.887 ioengine=libaio 00:20:19.887 direct=1 00:20:19.887 bs=4096 00:20:19.887 iodepth=1 00:20:19.887 norandommap=0 00:20:19.887 numjobs=1 00:20:19.887 00:20:19.887 verify_dump=1 00:20:19.887 verify_backlog=512 00:20:19.887 verify_state_save=0 00:20:19.887 do_verify=1 00:20:19.887 verify=crc32c-intel 00:20:19.887 [job0] 00:20:19.887 filename=/dev/nvme0n1 00:20:19.887 [job1] 00:20:19.887 filename=/dev/nvme0n2 00:20:19.887 [job2] 00:20:19.887 filename=/dev/nvme0n3 00:20:19.887 [job3] 00:20:19.887 filename=/dev/nvme0n4 00:20:19.887 Could not set queue depth (nvme0n1) 00:20:19.887 Could not set queue depth (nvme0n2) 00:20:19.887 Could not set queue depth (nvme0n3) 00:20:19.887 Could not set queue depth (nvme0n4) 00:20:20.148 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:20.148 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:20.148 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:20.148 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:20.148 fio-3.35 00:20:20.148 Starting 4 threads 00:20:21.534 00:20:21.534 job0: (groupid=0, jobs=1): err= 0: pid=551404: Wed Nov 20 12:47:54 2024 00:20:21.534 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:20:21.534 slat (nsec): min=5625, max=48293, avg=8884.45, stdev=7267.69 00:20:21.534 clat (usec): min=46, max=346, avg=85.97, stdev=51.11 00:20:21.534 lat (usec): min=52, max=375, avg=94.85, stdev=56.89 00:20:21.534 clat percentiles (usec): 00:20:21.534 | 1.00th=[ 49], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 55], 00:20:21.534 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 73], 60.00th=[ 78], 00:20:21.534 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 141], 95.00th=[ 229], 00:20:21.534 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 302], 99.95th=[ 306], 00:20:21.534 | 99.99th=[ 347] 00:20:21.534 write: IOPS=4544, BW=17.8MiB/s (18.6MB/s)(17.8MiB/1001msec); 0 zone resets 00:20:21.534 slat (nsec): min=8027, max=55776, avg=14670.71, stdev=10683.63 00:20:21.534 clat (usec): min=40, max=597, avg=113.73, stdev=86.18 00:20:21.534 lat (usec): min=53, max=607, avg=128.40, stdev=94.14 00:20:21.534 clat percentiles (usec): 00:20:21.534 | 1.00th=[ 47], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 55], 00:20:21.534 | 30.00th=[ 
60], 40.00th=[ 69], 50.00th=[ 75], 60.00th=[ 80], 00:20:21.534 | 70.00th=[ 90], 80.00th=[ 215], 90.00th=[ 262], 95.00th=[ 285], 00:20:21.534 | 99.00th=[ 359], 99.50th=[ 388], 99.90th=[ 437], 99.95th=[ 457], 00:20:21.534 | 99.99th=[ 594] 00:20:21.534 bw ( KiB/s): min=16712, max=16712, per=22.26%, avg=16712.00, stdev= 0.00, samples=1 00:20:21.534 iops : min= 4178, max= 4178, avg=4178.00, stdev= 0.00, samples=1 00:20:21.534 lat (usec) : 50=5.30%, 100=73.72%, 250=12.90%, 500=8.07%, 750=0.01% 00:20:21.534 cpu : usr=7.00%, sys=14.30%, ctx=8645, majf=0, minf=1 00:20:21.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:21.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.534 issued rwts: total=4096,4549,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:21.534 job1: (groupid=0, jobs=1): err= 0: pid=551405: Wed Nov 20 12:47:54 2024 00:20:21.534 read: IOPS=3889, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1001msec) 00:20:21.534 slat (nsec): min=5785, max=49984, avg=11088.49, stdev=9440.90 00:20:21.534 clat (usec): min=44, max=430, avg=100.49, stdev=77.06 00:20:21.534 lat (usec): min=51, max=436, avg=111.58, stdev=83.69 00:20:21.534 clat percentiles (usec): 00:20:21.534 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 53], 00:20:21.534 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 66], 00:20:21.534 | 70.00th=[ 82], 80.00th=[ 192], 90.00th=[ 235], 95.00th=[ 269], 00:20:21.534 | 99.00th=[ 330], 99.50th=[ 363], 99.90th=[ 408], 99.95th=[ 424], 00:20:21.534 | 99.99th=[ 433] 00:20:21.534 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:20:21.534 slat (nsec): min=8042, max=72463, avg=15658.09, stdev=11578.33 00:20:21.534 clat (usec): min=31, max=445, avg=115.35, stdev=93.25 00:20:21.534 lat (usec): min=52, max=465, avg=131.01, stdev=101.32 00:20:21.534 clat percentiles (usec): 00:20:21.534 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 51], 00:20:21.534 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 65], 00:20:21.534 | 70.00th=[ 133], 80.00th=[ 227], 90.00th=[ 265], 95.00th=[ 297], 00:20:21.534 | 99.00th=[ 359], 99.50th=[ 379], 99.90th=[ 416], 99.95th=[ 433], 00:20:21.534 | 99.99th=[ 445] 00:20:21.534 bw ( KiB/s): min= 9608, max= 9608, per=12.79%, avg=9608.00, stdev= 0.00, samples=1 00:20:21.534 iops : min= 2402, max= 2402, avg=2402.00, stdev= 0.00, samples=1 00:20:21.534 lat (usec) : 50=10.05%, 100=61.21%, 250=18.26%, 500=10.48% 00:20:21.534 cpu : usr=8.00%, sys=14.40%, ctx=7990, majf=0, minf=1 00:20:21.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:21.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.534 issued rwts: total=3893,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:21.534 job2: (groupid=0, jobs=1): err= 0: pid=551406: Wed Nov 20 12:47:54 2024 00:20:21.534 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:20:21.534 slat (nsec): min=5117, max=50034, avg=9780.88, stdev=7465.58 00:20:21.534 clat (usec): min=51, max=436, avg=98.81, stdev=58.32 00:20:21.534 lat (usec): min=57, max=442, avg=108.60, stdev=63.43 00:20:21.534 clat percentiles (usec): 00:20:21.534 | 1.00th=[ 56], 5.00th=[ 58], 10.00th=[ 60], 
20.00th=[ 64], 00:20:21.534 | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 81], 60.00th=[ 85], 00:20:21.534 | 70.00th=[ 89], 80.00th=[ 98], 90.00th=[ 196], 95.00th=[ 243], 00:20:21.534 | 99.00th=[ 314], 99.50th=[ 347], 99.90th=[ 379], 99.95th=[ 388], 00:20:21.534 | 99.99th=[ 437] 00:20:21.534 write: IOPS=4103, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:20:21.534 slat (nsec): min=7246, max=60369, avg=14718.25, stdev=9830.83 00:20:21.534 clat (usec): min=50, max=440, avg=113.72, stdev=77.57 00:20:21.534 lat (usec): min=59, max=460, avg=128.44, stdev=83.99 00:20:21.534 clat percentiles (usec): 00:20:21.534 | 1.00th=[ 54], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 65], 00:20:21.534 | 30.00th=[ 73], 40.00th=[ 77], 50.00th=[ 80], 60.00th=[ 84], 00:20:21.534 | 70.00th=[ 91], 80.00th=[ 194], 90.00th=[ 249], 95.00th=[ 281], 00:20:21.534 | 99.00th=[ 363], 99.50th=[ 383], 99.90th=[ 429], 99.95th=[ 433], 00:20:21.534 | 99.99th=[ 441] 00:20:21.534 bw ( KiB/s): min=16720, max=16720, per=22.27%, avg=16720.00, stdev= 0.00, samples=1 00:20:21.535 iops : min= 4180, max= 4180, avg=4180.00, stdev= 0.00, samples=1 00:20:21.535 lat (usec) : 100=77.99%, 250=15.13%, 500=6.89% 00:20:21.535 cpu : usr=6.80%, sys=14.00%, ctx=8204, majf=0, minf=1 00:20:21.535 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:21.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.535 issued rwts: total=4096,4108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.535 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:21.535 job3: (groupid=0, jobs=1): err= 0: pid=551407: Wed Nov 20 12:47:54 2024 00:20:21.535 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:20:21.535 slat (nsec): min=5578, max=32550, avg=8099.24, stdev=2931.76 00:20:21.535 clat (usec): min=46, max=309, avg=75.53, stdev=14.04 00:20:21.535 lat (usec): min=59, max=341, avg=83.63, stdev=15.17 00:20:21.535 clat percentiles (usec): 00:20:21.535 | 1.00th=[ 57], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 64], 00:20:21.535 | 30.00th=[ 68], 40.00th=[ 73], 50.00th=[ 76], 60.00th=[ 78], 00:20:21.535 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 91], 95.00th=[ 98], 00:20:21.535 | 99.00th=[ 114], 99.50th=[ 125], 99.90th=[ 163], 99.95th=[ 281], 00:20:21.535 | 99.99th=[ 310] 00:20:21.535 write: IOPS=6032, BW=23.6MiB/s (24.7MB/s)(23.6MiB/1001msec); 0 zone resets 00:20:21.535 slat (nsec): min=7457, max=74851, avg=10470.36, stdev=2619.32 00:20:21.535 clat (usec): min=37, max=135, avg=72.05, stdev=12.04 00:20:21.535 lat (usec): min=59, max=170, avg=82.52, stdev=13.09 00:20:21.535 clat percentiles (usec): 00:20:21.535 | 1.00th=[ 54], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 61], 00:20:21.535 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 73], 60.00th=[ 76], 00:20:21.535 | 70.00th=[ 78], 80.00th=[ 82], 90.00th=[ 88], 95.00th=[ 93], 00:20:21.535 | 99.00th=[ 108], 99.50th=[ 115], 99.90th=[ 128], 99.95th=[ 131], 00:20:21.535 | 99.99th=[ 137] 00:20:21.535 bw ( KiB/s): min=24576, max=24576, per=32.73%, avg=24576.00, stdev= 0.00, samples=1 00:20:21.535 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:20:21.535 lat (usec) : 50=0.03%, 100=97.00%, 250=2.92%, 500=0.04% 00:20:21.535 cpu : usr=8.30%, sys=16.50%, ctx=11673, majf=0, minf=1 00:20:21.535 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:21.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.535 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.535 issued rwts: total=5632,6039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.535 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:21.535 00:20:21.535 Run status group 0 (all jobs): 00:20:21.535 READ: bw=69.1MiB/s (72.5MB/s), 15.2MiB/s-22.0MiB/s (15.9MB/s-23.0MB/s), io=69.2MiB (72.6MB), run=1001-1001msec 00:20:21.535 WRITE: bw=73.3MiB/s (76.9MB/s), 16.0MiB/s-23.6MiB/s (16.8MB/s-24.7MB/s), io=73.4MiB (77.0MB), run=1001-1001msec 00:20:21.535 00:20:21.535 Disk stats (read/write): 00:20:21.535 nvme0n1: ios=3634/3852, merge=0/0, ticks=241/306, in_queue=547, util=86.17% 00:20:21.535 nvme0n2: ios=3072/3336, merge=0/0, ticks=204/256, in_queue=460, util=86.27% 00:20:21.535 nvme0n3: ios=3584/3762, merge=0/0, ticks=263/287, in_queue=550, util=88.80% 00:20:21.535 nvme0n4: ios=4725/5120, merge=0/0, ticks=326/317, in_queue=643, util=89.55% 00:20:21.535 12:47:54 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:21.535 [global] 00:20:21.535 thread=1 00:20:21.535 invalidate=1 00:20:21.535 rw=randwrite 00:20:21.535 time_based=1 00:20:21.535 runtime=1 00:20:21.535 ioengine=libaio 00:20:21.535 direct=1 00:20:21.535 bs=4096 00:20:21.535 iodepth=1 00:20:21.535 norandommap=0 00:20:21.535 numjobs=1 00:20:21.535 00:20:21.535 verify_dump=1 00:20:21.535 verify_backlog=512 00:20:21.535 verify_state_save=0 00:20:21.535 do_verify=1 00:20:21.535 verify=crc32c-intel 00:20:21.535 [job0] 00:20:21.535 filename=/dev/nvme0n1 00:20:21.535 [job1] 00:20:21.535 filename=/dev/nvme0n2 00:20:21.535 [job2] 00:20:21.535 filename=/dev/nvme0n3 00:20:21.535 [job3] 00:20:21.535 filename=/dev/nvme0n4 00:20:21.535 Could not set queue depth (nvme0n1) 00:20:21.535 Could not set queue depth (nvme0n2) 00:20:21.535 Could not set queue depth (nvme0n3) 00:20:21.535 Could not set queue depth (nvme0n4) 00:20:21.795 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:21.795 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:21.795 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:21.795 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:21.795 fio-3.35 00:20:21.795 Starting 4 threads 00:20:23.202 00:20:23.202 job0: (groupid=0, jobs=1): err= 0: pid=551925: Wed Nov 20 12:47:55 2024 00:20:23.202 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:20:23.202 slat (nsec): min=5676, max=53912, avg=9111.02, stdev=7141.83 00:20:23.202 clat (usec): min=42, max=416, avg=96.79, stdev=55.93 00:20:23.202 lat (usec): min=51, max=424, avg=105.90, stdev=60.50 00:20:23.202 clat percentiles (usec): 00:20:23.202 | 1.00th=[ 49], 5.00th=[ 52], 10.00th=[ 56], 20.00th=[ 67], 00:20:23.202 | 30.00th=[ 71], 40.00th=[ 75], 50.00th=[ 79], 60.00th=[ 87], 00:20:23.202 | 70.00th=[ 93], 80.00th=[ 101], 90.00th=[ 192], 95.00th=[ 239], 00:20:23.202 | 99.00th=[ 297], 99.50th=[ 343], 99.90th=[ 396], 99.95th=[ 400], 00:20:23.202 | 99.99th=[ 416] 00:20:23.202 write: IOPS=4882, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1001msec); 0 zone resets 00:20:23.202 slat (nsec): min=7622, max=47421, avg=10818.76, stdev=6548.52 00:20:23.202 clat (usec): min=38, max=416, avg=88.61, stdev=47.73 00:20:23.202 lat (usec): min=52, max=449, avg=99.43, stdev=52.16 00:20:23.202 clat 
percentiles (usec): 00:20:23.202 | 1.00th=[ 47], 5.00th=[ 51], 10.00th=[ 55], 20.00th=[ 65], 00:20:23.202 | 30.00th=[ 69], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 81], 00:20:23.202 | 70.00th=[ 87], 80.00th=[ 95], 90.00th=[ 118], 95.00th=[ 221], 00:20:23.202 | 99.00th=[ 269], 99.50th=[ 302], 99.90th=[ 355], 99.95th=[ 383], 00:20:23.202 | 99.99th=[ 416] 00:20:23.202 bw ( KiB/s): min=20480, max=20480, per=29.84%, avg=20480.00, stdev= 0.00, samples=1 00:20:23.202 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:20:23.202 lat (usec) : 50=3.80%, 100=78.28%, 250=14.80%, 500=3.12% 00:20:23.202 cpu : usr=7.10%, sys=13.00%, ctx=9495, majf=0, minf=1 00:20:23.202 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:23.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.202 issued rwts: total=4608,4887,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.202 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:23.202 job1: (groupid=0, jobs=1): err= 0: pid=551927: Wed Nov 20 12:47:55 2024 00:20:23.202 read: IOPS=3557, BW=13.9MiB/s (14.6MB/s)(13.9MiB/1001msec) 00:20:23.202 slat (nsec): min=5471, max=57913, avg=11485.14, stdev=8914.84 00:20:23.202 clat (usec): min=46, max=434, avg=119.38, stdev=73.19 00:20:23.202 lat (usec): min=52, max=465, avg=130.86, stdev=79.33 00:20:23.202 clat percentiles (usec): 00:20:23.202 | 1.00th=[ 51], 5.00th=[ 60], 10.00th=[ 67], 20.00th=[ 73], 00:20:23.202 | 30.00th=[ 80], 40.00th=[ 86], 50.00th=[ 91], 60.00th=[ 96], 00:20:23.202 | 70.00th=[ 104], 80.00th=[ 159], 90.00th=[ 249], 95.00th=[ 277], 00:20:23.202 | 99.00th=[ 355], 99.50th=[ 379], 99.90th=[ 416], 99.95th=[ 429], 00:20:23.202 | 99.99th=[ 437] 00:20:23.202 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:20:23.202 slat (nsec): min=7596, max=68768, avg=14942.13, stdev=10163.01 00:20:23.202 clat (usec): min=34, max=450, avg=126.73, stdev=82.82 00:20:23.202 lat (usec): min=53, max=482, avg=141.67, stdev=89.88 00:20:23.202 clat percentiles (usec): 00:20:23.202 | 1.00th=[ 50], 5.00th=[ 58], 10.00th=[ 64], 20.00th=[ 71], 00:20:23.202 | 30.00th=[ 78], 40.00th=[ 85], 50.00th=[ 90], 60.00th=[ 95], 00:20:23.202 | 70.00th=[ 106], 80.00th=[ 223], 90.00th=[ 265], 95.00th=[ 297], 00:20:23.202 | 99.00th=[ 367], 99.50th=[ 388], 99.90th=[ 424], 99.95th=[ 437], 00:20:23.202 | 99.99th=[ 453] 00:20:23.202 bw ( KiB/s): min=10808, max=10808, per=15.75%, avg=10808.00, stdev= 0.00, samples=1 00:20:23.202 iops : min= 2702, max= 2702, avg=2702.00, stdev= 0.00, samples=1 00:20:23.202 lat (usec) : 50=0.91%, 100=64.79%, 250=22.21%, 500=12.09% 00:20:23.203 cpu : usr=4.90%, sys=13.90%, ctx=7146, majf=0, minf=1 00:20:23.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:23.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.203 issued rwts: total=3561,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:23.203 job2: (groupid=0, jobs=1): err= 0: pid=551928: Wed Nov 20 12:47:55 2024 00:20:23.203 read: IOPS=3819, BW=14.9MiB/s (15.6MB/s)(14.9MiB/1001msec) 00:20:23.203 slat (nsec): min=5904, max=51370, avg=12016.03, stdev=10133.22 00:20:23.203 clat (usec): min=51, max=420, avg=121.52, stdev=73.00 00:20:23.203 lat (usec): min=57, max=427, avg=133.54, 
stdev=80.36 00:20:23.203 clat percentiles (usec): 00:20:23.203 | 1.00th=[ 56], 5.00th=[ 62], 10.00th=[ 68], 20.00th=[ 73], 00:20:23.203 | 30.00th=[ 76], 40.00th=[ 81], 50.00th=[ 90], 60.00th=[ 95], 00:20:23.203 | 70.00th=[ 108], 80.00th=[ 196], 90.00th=[ 245], 95.00th=[ 269], 00:20:23.203 | 99.00th=[ 330], 99.50th=[ 367], 99.90th=[ 404], 99.95th=[ 412], 00:20:23.203 | 99.99th=[ 420] 00:20:23.203 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:20:23.203 slat (nsec): min=7711, max=67715, avg=11704.31, stdev=8416.04 00:20:23.203 clat (usec): min=50, max=423, avg=101.55, stdev=56.76 00:20:23.203 lat (usec): min=58, max=463, avg=113.26, stdev=62.56 00:20:23.203 clat percentiles (usec): 00:20:23.203 | 1.00th=[ 56], 5.00th=[ 63], 10.00th=[ 66], 20.00th=[ 69], 00:20:23.203 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 79], 60.00th=[ 87], 00:20:23.203 | 70.00th=[ 93], 80.00th=[ 103], 90.00th=[ 202], 95.00th=[ 237], 00:20:23.203 | 99.00th=[ 281], 99.50th=[ 314], 99.90th=[ 371], 99.95th=[ 408], 00:20:23.203 | 99.99th=[ 424] 00:20:23.203 bw ( KiB/s): min=16384, max=16384, per=23.87%, avg=16384.00, stdev= 0.00, samples=1 00:20:23.203 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:20:23.203 lat (usec) : 100=72.28%, 250=21.57%, 500=6.15% 00:20:23.203 cpu : usr=6.90%, sys=12.80%, ctx=7919, majf=0, minf=1 00:20:23.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:23.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.203 issued rwts: total=3823,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:23.203 job3: (groupid=0, jobs=1): err= 0: pid=551929: Wed Nov 20 12:47:55 2024 00:20:23.203 read: IOPS=4255, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1001msec) 00:20:23.203 slat (nsec): min=5921, max=49675, avg=9381.10, stdev=7845.63 00:20:23.203 clat (usec): min=45, max=462, avg=94.67, stdev=72.72 00:20:23.203 lat (usec): min=57, max=468, avg=104.05, stdev=78.59 00:20:23.203 clat percentiles (usec): 00:20:23.203 | 1.00th=[ 54], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 59], 00:20:23.203 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 64], 60.00th=[ 67], 00:20:23.203 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 235], 95.00th=[ 269], 00:20:23.203 | 99.00th=[ 351], 99.50th=[ 379], 99.90th=[ 437], 99.95th=[ 453], 00:20:23.203 | 99.99th=[ 461] 00:20:23.203 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:20:23.203 slat (nsec): min=7945, max=86713, avg=12583.92, stdev=9105.26 00:20:23.203 clat (usec): min=48, max=454, avg=102.43, stdev=83.15 00:20:23.203 lat (usec): min=56, max=462, avg=115.01, stdev=89.95 00:20:23.203 clat percentiles (usec): 00:20:23.203 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 58], 00:20:23.203 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 66], 00:20:23.203 | 70.00th=[ 74], 80.00th=[ 120], 90.00th=[ 258], 95.00th=[ 293], 00:20:23.203 | 99.00th=[ 359], 99.50th=[ 379], 99.90th=[ 420], 99.95th=[ 429], 00:20:23.203 | 99.99th=[ 453] 00:20:23.203 bw ( KiB/s): min=10440, max=10440, per=15.21%, avg=10440.00, stdev= 0.00, samples=1 00:20:23.203 iops : min= 2610, max= 2610, avg=2610.00, stdev= 0.00, samples=1 00:20:23.203 lat (usec) : 50=0.10%, 100=80.98%, 250=9.16%, 500=9.77% 00:20:23.203 cpu : usr=8.10%, sys=12.60%, ctx=8868, majf=0, minf=1 00:20:23.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:20:23.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.203 issued rwts: total=4260,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:23.203 00:20:23.203 Run status group 0 (all jobs): 00:20:23.203 READ: bw=63.4MiB/s (66.5MB/s), 13.9MiB/s-18.0MiB/s (14.6MB/s-18.9MB/s), io=63.5MiB (66.6MB), run=1001-1001msec 00:20:23.203 WRITE: bw=67.0MiB/s (70.3MB/s), 14.0MiB/s-19.1MiB/s (14.7MB/s-20.0MB/s), io=67.1MiB (70.3MB), run=1001-1001msec 00:20:23.203 00:20:23.203 Disk stats (read/write): 00:20:23.203 nvme0n1: ios=3978/4096, merge=0/0, ticks=335/313, in_queue=648, util=86.57% 00:20:23.203 nvme0n2: ios=2690/3072, merge=0/0, ticks=227/274, in_queue=501, util=86.51% 00:20:23.203 nvme0n3: ios=3239/3584, merge=0/0, ticks=275/279, in_queue=554, util=88.96% 00:20:23.203 nvme0n4: ios=3227/3584, merge=0/0, ticks=241/273, in_queue=514, util=89.70% 00:20:23.203 12:47:55 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:23.203 [global] 00:20:23.203 thread=1 00:20:23.203 invalidate=1 00:20:23.203 rw=write 00:20:23.203 time_based=1 00:20:23.203 runtime=1 00:20:23.203 ioengine=libaio 00:20:23.203 direct=1 00:20:23.203 bs=4096 00:20:23.203 iodepth=128 00:20:23.203 norandommap=0 00:20:23.203 numjobs=1 00:20:23.203 00:20:23.203 verify_dump=1 00:20:23.203 verify_backlog=512 00:20:23.203 verify_state_save=0 00:20:23.203 do_verify=1 00:20:23.203 verify=crc32c-intel 00:20:23.203 [job0] 00:20:23.203 filename=/dev/nvme0n1 00:20:23.203 [job1] 00:20:23.203 filename=/dev/nvme0n2 00:20:23.203 [job2] 00:20:23.203 filename=/dev/nvme0n3 00:20:23.203 [job3] 00:20:23.203 filename=/dev/nvme0n4 00:20:23.203 Could not set queue depth (nvme0n1) 00:20:23.203 Could not set queue depth (nvme0n2) 00:20:23.203 Could not set queue depth (nvme0n3) 00:20:23.203 Could not set queue depth (nvme0n4) 00:20:23.467 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:23.467 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:23.467 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:23.467 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:23.467 fio-3.35 00:20:23.467 Starting 4 threads 00:20:24.876 00:20:24.876 job0: (groupid=0, jobs=1): err= 0: pid=552458: Wed Nov 20 12:47:57 2024 00:20:24.876 read: IOPS=9689, BW=37.8MiB/s (39.7MB/s)(38.0MiB/1004msec) 00:20:24.876 slat (nsec): min=1153, max=1004.0k, avg=49584.39, stdev=141425.39 00:20:24.876 clat (usec): min=2637, max=14456, avg=6406.52, stdev=4170.10 00:20:24.876 lat (usec): min=2642, max=14487, avg=6456.10, stdev=4202.30 00:20:24.876 clat percentiles (usec): 00:20:24.876 | 1.00th=[ 3261], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3654], 00:20:24.876 | 30.00th=[ 3785], 40.00th=[ 3884], 50.00th=[ 4015], 60.00th=[ 4228], 00:20:24.876 | 70.00th=[ 4686], 80.00th=[13042], 90.00th=[13435], 95.00th=[13566], 00:20:24.876 | 99.00th=[13960], 99.50th=[13960], 99.90th=[14091], 99.95th=[14353], 00:20:24.876 | 99.99th=[14484] 00:20:24.876 write: IOPS=10.0k, BW=39.1MiB/s (41.0MB/s)(39.3MiB/1004msec); 0 zone resets 00:20:24.876 slat (nsec): min=1671, max=1236.1k, avg=49066.39, stdev=133116.78 
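The generated job file above maps directly onto ordinary fio command-line options; a roughly equivalent standalone invocation for one of the four namespaces, with every value taken from the [global]/[job0] sections as traced:

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread=1 \
      --invalidate=1 --rw=write --bs=4096 --iodepth=128 --numjobs=1 \
      --time_based=1 --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1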
00:20:24.876 clat (usec): min=2408, max=13721, avg=6427.73, stdev=4042.19 00:20:24.876 lat (usec): min=2416, max=13723, avg=6476.80, stdev=4073.36 00:20:24.876 clat percentiles (usec): 00:20:24.876 | 1.00th=[ 2868], 5.00th=[ 3261], 10.00th=[ 3359], 20.00th=[ 3523], 00:20:24.876 | 30.00th=[ 3621], 40.00th=[ 3752], 50.00th=[ 3916], 60.00th=[ 4228], 00:20:24.876 | 70.00th=[11338], 80.00th=[12387], 90.00th=[12649], 95.00th=[12780], 00:20:24.876 | 99.00th=[13173], 99.50th=[13173], 99.90th=[13566], 99.95th=[13698], 00:20:24.876 | 99.99th=[13698] 00:20:24.876 bw ( KiB/s): min=20480, max=58936, per=39.22%, avg=39708.00, stdev=27192.50, samples=2 00:20:24.876 iops : min= 5120, max=14734, avg=9927.00, stdev=6798.12, samples=2 00:20:24.876 lat (msec) : 4=50.93%, 10=20.72%, 20=28.35% 00:20:24.876 cpu : usr=3.09%, sys=7.78%, ctx=2909, majf=0, minf=1 00:20:24.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:20:24.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:24.876 issued rwts: total=9728,10054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:24.876 job1: (groupid=0, jobs=1): err= 0: pid=552459: Wed Nov 20 12:47:57 2024 00:20:24.876 read: IOPS=4748, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1004msec) 00:20:24.876 slat (nsec): min=1208, max=3607.5k, avg=103755.17, stdev=218489.57 00:20:24.876 clat (usec): min=3174, max=21898, avg=13227.48, stdev=1067.46 00:20:24.876 lat (usec): min=3828, max=21904, avg=13331.24, stdev=1055.57 00:20:24.876 clat percentiles (usec): 00:20:24.876 | 1.00th=[11600], 5.00th=[12518], 10.00th=[12649], 20.00th=[12911], 00:20:24.876 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:20:24.876 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13566], 95.00th=[13960], 00:20:24.876 | 99.00th=[18220], 99.50th=[19006], 99.90th=[21890], 99.95th=[21890], 00:20:24.876 | 99.99th=[21890] 00:20:24.876 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:20:24.876 slat (nsec): min=1680, max=3819.6k, avg=95881.62, stdev=207833.08 00:20:24.876 clat (usec): min=6529, max=19294, avg=12471.78, stdev=1145.68 00:20:24.876 lat (usec): min=7068, max=19302, avg=12567.66, stdev=1138.75 00:20:24.876 clat percentiles (usec): 00:20:24.876 | 1.00th=[ 9503], 5.00th=[11731], 10.00th=[11863], 20.00th=[12125], 00:20:24.876 | 30.00th=[12256], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:20:24.876 | 70.00th=[12649], 80.00th=[12649], 90.00th=[12911], 95.00th=[13173], 00:20:24.876 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268], 00:20:24.876 | 99.99th=[19268] 00:20:24.876 bw ( KiB/s): min=20480, max=20480, per=20.23%, avg=20480.00, stdev= 0.00, samples=2 00:20:24.876 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:20:24.876 lat (msec) : 4=0.05%, 10=1.15%, 20=98.57%, 50=0.22% 00:20:24.876 cpu : usr=2.59%, sys=5.38%, ctx=3086, majf=0, minf=1 00:20:24.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:20:24.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:24.876 issued rwts: total=4767,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:24.876 job2: (groupid=0, jobs=1): err= 0: pid=552460: Wed Nov 20 12:47:57 2024 
00:20:24.876 read: IOPS=4951, BW=19.3MiB/s (20.3MB/s)(19.4MiB/1002msec) 00:20:24.876 slat (nsec): min=1286, max=988956, avg=100236.65, stdev=192419.65 00:20:24.876 clat (usec): min=1575, max=14739, avg=12838.51, stdev=1424.09 00:20:24.876 lat (usec): min=2150, max=14742, avg=12938.75, stdev=1423.55 00:20:24.876 clat percentiles (usec): 00:20:24.876 | 1.00th=[ 5538], 5.00th=[ 9503], 10.00th=[12387], 20.00th=[12780], 00:20:24.876 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:20:24.876 | 70.00th=[13435], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:20:24.876 | 99.00th=[14091], 99.50th=[14353], 99.90th=[14615], 99.95th=[14615], 00:20:24.876 | 99.99th=[14746] 00:20:24.876 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:20:24.876 slat (nsec): min=1774, max=1272.4k, avg=94986.11, stdev=181524.48 00:20:24.876 clat (usec): min=8120, max=14268, avg=12313.34, stdev=899.46 00:20:24.876 lat (usec): min=8622, max=14426, avg=12408.33, stdev=896.65 00:20:24.876 clat percentiles (usec): 00:20:24.876 | 1.00th=[ 8717], 5.00th=[ 9896], 10.00th=[11863], 20.00th=[12125], 00:20:24.876 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12518], 00:20:24.876 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13042], 95.00th=[13173], 00:20:24.876 | 99.00th=[13698], 99.50th=[13829], 99.90th=[14091], 99.95th=[14222], 00:20:24.876 | 99.99th=[14222] 00:20:24.876 bw ( KiB/s): min=20480, max=20480, per=20.23%, avg=20480.00, stdev= 0.00, samples=2 00:20:24.876 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:20:24.876 lat (msec) : 2=0.01%, 4=0.32%, 10=5.57%, 20=94.10% 00:20:24.876 cpu : usr=2.90%, sys=6.49%, ctx=2809, majf=0, minf=1 00:20:24.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:20:24.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:24.876 issued rwts: total=4961,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:24.876 job3: (groupid=0, jobs=1): err= 0: pid=552461: Wed Nov 20 12:47:57 2024 00:20:24.876 read: IOPS=4942, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1002msec) 00:20:24.876 slat (nsec): min=1270, max=1300.3k, avg=100563.12, stdev=200409.19 00:20:24.876 clat (usec): min=1604, max=14688, avg=12842.87, stdev=1433.68 00:20:24.876 lat (usec): min=2174, max=14710, avg=12943.44, stdev=1432.30 00:20:24.876 clat percentiles (usec): 00:20:24.876 | 1.00th=[ 5604], 5.00th=[ 9503], 10.00th=[12256], 20.00th=[12780], 00:20:24.876 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:20:24.876 | 70.00th=[13435], 80.00th=[13435], 90.00th=[13566], 95.00th=[13829], 00:20:24.876 | 99.00th=[14091], 99.50th=[14222], 99.90th=[14615], 99.95th=[14615], 00:20:24.876 | 99.99th=[14746] 00:20:24.876 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:20:24.876 slat (nsec): min=1790, max=1226.8k, avg=94806.70, stdev=186781.91 00:20:24.876 clat (usec): min=8022, max=14450, avg=12333.33, stdev=922.64 00:20:24.876 lat (usec): min=8576, max=14459, avg=12428.13, stdev=919.01 00:20:24.876 clat percentiles (usec): 00:20:24.876 | 1.00th=[ 8717], 5.00th=[ 9634], 10.00th=[11863], 20.00th=[12125], 00:20:24.876 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:20:24.876 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13042], 95.00th=[13173], 00:20:24.876 | 99.00th=[13698], 
99.50th=[13829], 99.90th=[14091], 99.95th=[14222], 00:20:24.876 | 99.99th=[14484] 00:20:24.876 bw ( KiB/s): min=20480, max=20480, per=20.23%, avg=20480.00, stdev= 0.00, samples=2 00:20:24.876 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:20:24.876 lat (msec) : 2=0.01%, 4=0.32%, 10=5.70%, 20=93.97% 00:20:24.876 cpu : usr=2.80%, sys=6.59%, ctx=2954, majf=0, minf=1 00:20:24.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:20:24.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:24.876 issued rwts: total=4952,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:24.876 00:20:24.876 Run status group 0 (all jobs): 00:20:24.876 READ: bw=95.0MiB/s (99.6MB/s), 18.5MiB/s-37.8MiB/s (19.4MB/s-39.7MB/s), io=95.3MiB (100.0MB), run=1002-1004msec 00:20:24.876 WRITE: bw=98.9MiB/s (104MB/s), 19.9MiB/s-39.1MiB/s (20.9MB/s-41.0MB/s), io=99.3MiB (104MB), run=1002-1004msec 00:20:24.876 00:20:24.876 Disk stats (read/write): 00:20:24.876 nvme0n1: ios=7262/7680, merge=0/0, ticks=12619/13032, in_queue=25651, util=86.27% 00:20:24.876 nvme0n2: ios=4096/4332, merge=0/0, ticks=13209/12983, in_queue=26192, util=86.66% 00:20:24.876 nvme0n3: ios=4096/4284, merge=0/0, ticks=13160/12808, in_queue=25968, util=88.92% 00:20:24.876 nvme0n4: ios=4096/4273, merge=0/0, ticks=13152/12780, in_queue=25932, util=89.67% 00:20:24.876 12:47:57 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:24.876 [global] 00:20:24.876 thread=1 00:20:24.876 invalidate=1 00:20:24.876 rw=randwrite 00:20:24.876 time_based=1 00:20:24.876 runtime=1 00:20:24.876 ioengine=libaio 00:20:24.876 direct=1 00:20:24.876 bs=4096 00:20:24.876 iodepth=128 00:20:24.876 norandommap=0 00:20:24.876 numjobs=1 00:20:24.876 00:20:24.876 verify_dump=1 00:20:24.876 verify_backlog=512 00:20:24.876 verify_state_save=0 00:20:24.876 do_verify=1 00:20:24.876 verify=crc32c-intel 00:20:24.876 [job0] 00:20:24.877 filename=/dev/nvme0n1 00:20:24.877 [job1] 00:20:24.877 filename=/dev/nvme0n2 00:20:24.877 [job2] 00:20:24.877 filename=/dev/nvme0n3 00:20:24.877 [job3] 00:20:24.877 filename=/dev/nvme0n4 00:20:24.877 Could not set queue depth (nvme0n1) 00:20:24.877 Could not set queue depth (nvme0n2) 00:20:24.877 Could not set queue depth (nvme0n3) 00:20:24.877 Could not set queue depth (nvme0n4) 00:20:25.137 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:25.137 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:25.137 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:25.137 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:25.137 fio-3.35 00:20:25.137 Starting 4 threads 00:20:26.535 00:20:26.535 job0: (groupid=0, jobs=1): err= 0: pid=552990: Wed Nov 20 12:47:59 2024 00:20:26.535 read: IOPS=15.3k, BW=59.9MiB/s (62.8MB/s)(60.0MiB/1002msec) 00:20:26.535 slat (nsec): min=1145, max=1383.7k, avg=31935.75, stdev=126421.65 00:20:26.535 clat (usec): min=2905, max=8782, avg=4175.31, stdev=1207.00 00:20:26.535 lat (usec): min=2930, max=8783, avg=4207.25, stdev=1213.02 00:20:26.535 clat percentiles (usec): 00:20:26.535 | 
1.00th=[ 3228], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3589], 00:20:26.535 | 30.00th=[ 3687], 40.00th=[ 3752], 50.00th=[ 3851], 60.00th=[ 3949], 00:20:26.535 | 70.00th=[ 4047], 80.00th=[ 4178], 90.00th=[ 4424], 95.00th=[ 8225], 00:20:26.535 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 8717], 99.95th=[ 8717], 00:20:26.535 | 99.99th=[ 8717] 00:20:26.535 write: IOPS=15.6k, BW=60.9MiB/s (63.9MB/s)(61.1MiB/1002msec); 0 zone resets 00:20:26.535 slat (nsec): min=1613, max=1277.6k, avg=30666.58, stdev=120259.32 00:20:26.535 clat (usec): min=1303, max=9406, avg=4022.65, stdev=1260.11 00:20:26.535 lat (usec): min=2186, max=9408, avg=4053.32, stdev=1267.52 00:20:26.535 clat percentiles (usec): 00:20:26.535 | 1.00th=[ 3064], 5.00th=[ 3228], 10.00th=[ 3294], 20.00th=[ 3425], 00:20:26.535 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3687], 60.00th=[ 3752], 00:20:26.535 | 70.00th=[ 3851], 80.00th=[ 4015], 90.00th=[ 4293], 95.00th=[ 8094], 00:20:26.535 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[ 8979], 99.95th=[ 9372], 00:20:26.535 | 99.99th=[ 9372] 00:20:26.535 bw ( KiB/s): min=69536, max=69536, per=53.61%, avg=69536.00, stdev= 0.00, samples=1 00:20:26.535 iops : min=17384, max=17384, avg=17384.00, stdev= 0.00, samples=1 00:20:26.535 lat (msec) : 2=0.01%, 4=72.36%, 10=27.64% 00:20:26.535 cpu : usr=4.40%, sys=7.99%, ctx=2261, majf=0, minf=1 00:20:26.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:26.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:26.535 issued rwts: total=15360,15631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.535 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:26.535 job1: (groupid=0, jobs=1): err= 0: pid=552992: Wed Nov 20 12:47:59 2024 00:20:26.535 read: IOPS=5228, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1003msec) 00:20:26.535 slat (nsec): min=1184, max=1642.9k, avg=93771.20, stdev=209913.16 00:20:26.535 clat (usec): min=2103, max=14604, avg=12016.50, stdev=2056.35 00:20:26.535 lat (usec): min=2598, max=14613, avg=12110.27, stdev=2070.03 00:20:26.535 clat percentiles (usec): 00:20:26.535 | 1.00th=[ 5276], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 8979], 00:20:26.535 | 30.00th=[12649], 40.00th=[12780], 50.00th=[13042], 60.00th=[13173], 00:20:26.535 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13304], 95.00th=[13435], 00:20:26.535 | 99.00th=[13698], 99.50th=[13829], 99.90th=[14222], 99.95th=[14222], 00:20:26.535 | 99.99th=[14615] 00:20:26.535 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:20:26.535 slat (nsec): min=1635, max=1642.2k, avg=87699.30, stdev=197004.01 00:20:26.535 clat (usec): min=7172, max=14063, avg=11330.47, stdev=1810.79 00:20:26.535 lat (usec): min=7174, max=14087, avg=11418.17, stdev=1823.74 00:20:26.535 clat percentiles (usec): 00:20:26.535 | 1.00th=[ 7439], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8586], 00:20:26.535 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:20:26.535 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12780], 95.00th=[12911], 00:20:26.535 | 99.00th=[13173], 99.50th=[13173], 99.90th=[13304], 99.95th=[13435], 00:20:26.535 | 99.99th=[14091] 00:20:26.535 bw ( KiB/s): min=20480, max=24552, per=17.36%, avg=22516.00, stdev=2879.34, samples=2 00:20:26.535 iops : min= 5120, max= 6138, avg=5629.00, stdev=719.83, samples=2 00:20:26.535 lat (msec) : 4=0.23%, 10=22.13%, 20=77.64% 00:20:26.535 cpu : usr=1.50%, sys=6.49%, ctx=2296, 
majf=0, minf=1 00:20:26.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:26.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:26.535 issued rwts: total=5244,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.535 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:26.535 job2: (groupid=0, jobs=1): err= 0: pid=552993: Wed Nov 20 12:47:59 2024 00:20:26.535 read: IOPS=5281, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1002msec) 00:20:26.535 slat (nsec): min=1223, max=1701.4k, avg=92916.26, stdev=225020.85 00:20:26.535 clat (usec): min=1225, max=14583, avg=11905.64, stdev=2198.24 00:20:26.535 lat (usec): min=2178, max=14591, avg=11998.56, stdev=2210.35 00:20:26.535 clat percentiles (usec): 00:20:26.535 | 1.00th=[ 5276], 5.00th=[ 8029], 10.00th=[ 8225], 20.00th=[ 8586], 00:20:26.535 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13042], 00:20:26.535 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13304], 95.00th=[13435], 00:20:26.535 | 99.00th=[13698], 99.50th=[13829], 99.90th=[14091], 99.95th=[14222], 00:20:26.535 | 99.99th=[14615] 00:20:26.535 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:20:26.535 slat (nsec): min=1684, max=1207.6k, avg=87708.23, stdev=213541.87 00:20:26.535 clat (usec): min=5073, max=13899, avg=11317.65, stdev=1842.79 00:20:26.535 lat (usec): min=5093, max=13910, avg=11405.36, stdev=1853.91 00:20:26.535 clat percentiles (usec): 00:20:26.535 | 1.00th=[ 7111], 5.00th=[ 8094], 10.00th=[ 8160], 20.00th=[ 8356], 00:20:26.535 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12256], 00:20:26.535 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12780], 95.00th=[12911], 00:20:26.535 | 99.00th=[13173], 99.50th=[13304], 99.90th=[13566], 99.95th=[13829], 00:20:26.535 | 99.99th=[13960] 00:20:26.535 bw ( KiB/s): min=20480, max=20480, per=15.79%, avg=20480.00, stdev= 0.00, samples=1 00:20:26.535 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:20:26.535 lat (msec) : 2=0.01%, 4=0.29%, 10=23.20%, 20=76.50% 00:20:26.535 cpu : usr=2.70%, sys=5.00%, ctx=2098, majf=0, minf=2 00:20:26.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:26.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:26.536 issued rwts: total=5292,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.536 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:26.536 job3: (groupid=0, jobs=1): err= 0: pid=552994: Wed Nov 20 12:47:59 2024 00:20:26.536 read: IOPS=5215, BW=20.4MiB/s (21.4MB/s)(20.4MiB/1003msec) 00:20:26.536 slat (nsec): min=1221, max=1174.0k, avg=93678.94, stdev=205045.25 00:20:26.536 clat (usec): min=2126, max=13999, avg=12020.30, stdev=2079.04 00:20:26.536 lat (usec): min=2532, max=14461, avg=12113.98, stdev=2094.03 00:20:26.536 clat percentiles (usec): 00:20:26.536 | 1.00th=[ 6194], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8848], 00:20:26.536 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:20:26.536 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13304], 95.00th=[13435], 00:20:26.536 | 99.00th=[13698], 99.50th=[13829], 99.90th=[13960], 99.95th=[13960], 00:20:26.536 | 99.99th=[13960] 00:20:26.536 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:20:26.536 slat (nsec): min=1663, max=1175.5k, 
avg=87966.94, stdev=192825.83 00:20:26.536 clat (usec): min=7151, max=13838, avg=11354.26, stdev=1858.36 00:20:26.536 lat (usec): min=7153, max=13864, avg=11442.22, stdev=1872.95 00:20:26.536 clat percentiles (usec): 00:20:26.536 | 1.00th=[ 7439], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8455], 00:20:26.536 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:20:26.536 | 70.00th=[12387], 80.00th=[12649], 90.00th=[12780], 95.00th=[12911], 00:20:26.536 | 99.00th=[13173], 99.50th=[13173], 99.90th=[13435], 99.95th=[13566], 00:20:26.536 | 99.99th=[13829] 00:20:26.536 bw ( KiB/s): min=20480, max=24448, per=17.32%, avg=22464.00, stdev=2805.80, samples=2 00:20:26.536 iops : min= 5120, max= 6112, avg=5616.00, stdev=701.45, samples=2 00:20:26.536 lat (msec) : 4=0.25%, 10=22.26%, 20=77.49% 00:20:26.536 cpu : usr=2.30%, sys=5.79%, ctx=2205, majf=0, minf=1 00:20:26.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:26.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:26.536 issued rwts: total=5231,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.536 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:26.536 00:20:26.536 Run status group 0 (all jobs): 00:20:26.536 READ: bw=121MiB/s (127MB/s), 20.4MiB/s-59.9MiB/s (21.4MB/s-62.8MB/s), io=122MiB (127MB), run=1002-1003msec 00:20:26.536 WRITE: bw=127MiB/s (133MB/s), 21.9MiB/s-60.9MiB/s (23.0MB/s-63.9MB/s), io=127MiB (133MB), run=1002-1003msec 00:20:26.536 00:20:26.536 Disk stats (read/write): 00:20:26.536 nvme0n1: ios=13874/13904, merge=0/0, ticks=12269/11505, in_queue=23774, util=83.17% 00:20:26.536 nvme0n2: ios=4089/4096, merge=0/0, ticks=17140/16012, in_queue=33152, util=83.74% 00:20:26.536 nvme0n3: ios=4089/4096, merge=0/0, ticks=17202/16149, in_queue=33351, util=87.78% 00:20:26.536 nvme0n4: ios=4069/4096, merge=0/0, ticks=17154/16128, in_queue=33282, util=89.32% 00:20:26.536 12:47:59 -- target/fio.sh@55 -- # sync 00:20:26.536 12:47:59 -- target/fio.sh@59 -- # fio_pid=553326 00:20:26.536 12:47:59 -- target/fio.sh@61 -- # sleep 3 00:20:26.536 12:47:59 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:26.536 [global] 00:20:26.536 thread=1 00:20:26.536 invalidate=1 00:20:26.536 rw=read 00:20:26.536 time_based=1 00:20:26.536 runtime=10 00:20:26.536 ioengine=libaio 00:20:26.536 direct=1 00:20:26.536 bs=4096 00:20:26.536 iodepth=1 00:20:26.536 norandommap=1 00:20:26.536 numjobs=1 00:20:26.536 00:20:26.536 [job0] 00:20:26.536 filename=/dev/nvme0n1 00:20:26.536 [job1] 00:20:26.536 filename=/dev/nvme0n2 00:20:26.536 [job2] 00:20:26.536 filename=/dev/nvme0n3 00:20:26.536 [job3] 00:20:26.536 filename=/dev/nvme0n4 00:20:26.536 Could not set queue depth (nvme0n1) 00:20:26.536 Could not set queue depth (nvme0n2) 00:20:26.536 Could not set queue depth (nvme0n3) 00:20:26.536 Could not set queue depth (nvme0n4) 00:20:26.796 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:26.796 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:26.796 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:26.796 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:26.796 fio-3.35 00:20:26.796 
Starting 4 threads 00:20:29.350 12:48:02 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:29.612 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=44556288, buflen=4096 00:20:29.612 fio: pid=553517, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:29.612 12:48:02 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:29.612 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=49545216, buflen=4096 00:20:29.612 fio: pid=553516, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:29.612 12:48:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:29.612 12:48:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:29.873 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=7360512, buflen=4096 00:20:29.873 fio: pid=553514, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:29.873 12:48:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:29.873 12:48:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:30.134 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=16793600, buflen=4096 00:20:30.134 fio: pid=553515, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:30.134 12:48:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:30.134 12:48:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:30.134 00:20:30.135 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=553514: Wed Nov 20 12:48:03 2024 00:20:30.135 read: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(135MiB/2982msec) 00:20:30.135 slat (usec): min=3, max=15812, avg= 8.66, stdev=131.50 00:20:30.135 clat (usec): min=29, max=20760, avg=75.67, stdev=117.41 00:20:30.135 lat (usec): min=47, max=20790, avg=84.33, stdev=177.95 00:20:30.135 clat percentiles (usec): 00:20:30.135 | 1.00th=[ 48], 5.00th=[ 52], 10.00th=[ 56], 20.00th=[ 62], 00:20:30.135 | 30.00th=[ 65], 40.00th=[ 67], 50.00th=[ 69], 60.00th=[ 71], 00:20:30.135 | 70.00th=[ 73], 80.00th=[ 75], 90.00th=[ 80], 95.00th=[ 98], 00:20:30.135 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 363], 99.95th=[ 383], 00:20:30.135 | 99.99th=[ 420] 00:20:30.135 bw ( KiB/s): min=41888, max=55480, per=41.52%, avg=49254.40, stdev=5067.75, samples=5 00:20:30.135 iops : min=10472, max=13870, avg=12313.60, stdev=1266.94, samples=5 00:20:30.135 lat (usec) : 50=2.67%, 100=92.44%, 250=3.26%, 500=1.61%, 750=0.01% 00:20:30.135 lat (msec) : 50=0.01% 00:20:30.135 cpu : usr=4.80%, sys=13.82%, ctx=34570, majf=0, minf=1 00:20:30.135 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.135 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.135 issued rwts: total=34566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.135 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:30.135 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u 
error, error=Operation not supported): pid=553515: Wed Nov 20 12:48:03 2024 00:20:30.135 read: IOPS=11.6k, BW=45.2MiB/s (47.4MB/s)(144MiB/3183msec) 00:20:30.135 slat (usec): min=2, max=12936, avg= 8.77, stdev=129.75 00:20:30.135 clat (usec): min=36, max=22396, avg=75.71, stdev=163.15 00:20:30.135 lat (usec): min=48, max=22402, avg=84.48, stdev=209.69 00:20:30.135 clat percentiles (usec): 00:20:30.135 | 1.00th=[ 48], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 60], 00:20:30.135 | 30.00th=[ 64], 40.00th=[ 67], 50.00th=[ 69], 60.00th=[ 71], 00:20:30.135 | 70.00th=[ 73], 80.00th=[ 75], 90.00th=[ 80], 95.00th=[ 101], 00:20:30.135 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 367], 99.95th=[ 392], 00:20:30.135 | 99.99th=[ 570] 00:20:30.135 bw ( KiB/s): min=38520, max=51936, per=39.28%, avg=46600.00, stdev=4972.46, samples=6 00:20:30.135 iops : min= 9630, max=12984, avg=11650.00, stdev=1243.12, samples=6 00:20:30.135 lat (usec) : 50=4.03%, 100=90.91%, 250=3.20%, 500=1.84%, 750=0.01% 00:20:30.135 lat (msec) : 50=0.01% 00:20:30.135 cpu : usr=4.27%, sys=14.27%, ctx=36877, majf=0, minf=2 00:20:30.135 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.135 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.135 issued rwts: total=36869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.135 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:30.135 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=553516: Wed Nov 20 12:48:03 2024 00:20:30.135 read: IOPS=4332, BW=16.9MiB/s (17.7MB/s)(47.2MiB/2792msec) 00:20:30.135 slat (usec): min=5, max=8967, avg=20.48, stdev=95.98 00:20:30.135 clat (usec): min=42, max=20743, avg=205.67, stdev=208.16 00:20:30.135 lat (usec): min=54, max=20749, avg=226.16, stdev=230.65 00:20:30.135 clat percentiles (usec): 00:20:30.135 | 1.00th=[ 57], 5.00th=[ 65], 10.00th=[ 75], 20.00th=[ 96], 00:20:30.135 | 30.00th=[ 133], 40.00th=[ 196], 50.00th=[ 225], 60.00th=[ 235], 00:20:30.135 | 70.00th=[ 249], 80.00th=[ 273], 90.00th=[ 322], 95.00th=[ 359], 00:20:30.135 | 99.00th=[ 412], 99.50th=[ 437], 99.90th=[ 469], 99.95th=[ 482], 00:20:30.135 | 99.99th=[ 832] 00:20:30.135 bw ( KiB/s): min=14808, max=17520, per=13.43%, avg=15937.60, stdev=1143.88, samples=5 00:20:30.135 iops : min= 3702, max= 4380, avg=3984.40, stdev=285.97, samples=5 00:20:30.135 lat (usec) : 50=0.02%, 100=20.96%, 250=49.76%, 500=29.23%, 1000=0.01% 00:20:30.135 lat (msec) : 50=0.01% 00:20:30.135 cpu : usr=4.44%, sys=12.83%, ctx=12100, majf=0, minf=2 00:20:30.135 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.135 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.135 issued rwts: total=12097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.135 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:30.135 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=553517: Wed Nov 20 12:48:03 2024 00:20:30.135 read: IOPS=4141, BW=16.2MiB/s (17.0MB/s)(42.5MiB/2627msec) 00:20:30.135 slat (nsec): min=5641, max=73622, avg=20398.96, stdev=11659.72 00:20:30.135 clat (usec): min=48, max=874, avg=215.07, stdev=87.39 00:20:30.135 lat (usec): min=54, max=904, avg=235.47, stdev=91.24 00:20:30.135 clat percentiles (usec): 00:20:30.135 | 
1.00th=[ 57], 5.00th=[ 70], 10.00th=[ 88], 20.00th=[ 117], 00:20:30.135 | 30.00th=[ 190], 40.00th=[ 210], 50.00th=[ 231], 60.00th=[ 239], 00:20:30.135 | 70.00th=[ 255], 80.00th=[ 277], 90.00th=[ 326], 95.00th=[ 363], 00:20:30.135 | 99.00th=[ 416], 99.50th=[ 437], 99.90th=[ 478], 99.95th=[ 482], 00:20:30.135 | 99.99th=[ 490] 00:20:30.135 bw ( KiB/s): min=14792, max=17584, per=13.46%, avg=15963.20, stdev=1166.09, samples=5 00:20:30.135 iops : min= 3698, max= 4396, avg=3990.80, stdev=291.52, samples=5 00:20:30.135 lat (usec) : 50=0.05%, 100=14.31%, 250=53.88%, 500=31.74%, 1000=0.01% 00:20:30.135 cpu : usr=4.65%, sys=12.68%, ctx=10882, majf=0, minf=2 00:20:30.135 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.135 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.135 issued rwts: total=10879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.135 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:30.135 00:20:30.135 Run status group 0 (all jobs): 00:20:30.135 READ: bw=116MiB/s (121MB/s), 16.2MiB/s-45.3MiB/s (17.0MB/s-47.5MB/s), io=369MiB (387MB), run=2627-3183msec 00:20:30.135 00:20:30.135 Disk stats (read/write): 00:20:30.135 nvme0n1: ios=33501/0, merge=0/0, ticks=2113/0, in_queue=2113, util=93.79% 00:20:30.135 nvme0n2: ios=35912/0, merge=0/0, ticks=2333/0, in_queue=2333, util=94.34% 00:20:30.135 nvme0n3: ios=10465/0, merge=0/0, ticks=1558/0, in_queue=1558, util=96.12% 00:20:30.135 nvme0n4: ios=10856/0, merge=0/0, ticks=1574/0, in_queue=1574, util=96.44% 00:20:30.396 12:48:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:30.396 12:48:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:30.396 12:48:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:30.396 12:48:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:30.657 12:48:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:30.657 12:48:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:30.919 12:48:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:30.919 12:48:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:30.919 12:48:03 -- target/fio.sh@69 -- # fio_status=0 00:20:30.919 12:48:03 -- target/fio.sh@70 -- # wait 553326 00:20:30.919 12:48:03 -- target/fio.sh@70 -- # fio_status=4 00:20:30.919 12:48:03 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:32.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:32.306 12:48:05 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:32.306 12:48:05 -- common/autotest_common.sh@1208 -- # local i=0 00:20:32.306 12:48:05 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:32.306 12:48:05 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:32.306 12:48:05 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:32.306 12:48:05 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:32.306 12:48:05 -- 
common/autotest_common.sh@1220 -- # return 0 00:20:32.306 12:48:05 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:32.306 12:48:05 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:32.306 nvmf hotplug test: fio failed as expected 00:20:32.306 12:48:05 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:32.306 12:48:05 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:32.306 12:48:05 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:32.306 12:48:05 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:32.306 12:48:05 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:32.306 12:48:05 -- target/fio.sh@91 -- # nvmftestfini 00:20:32.306 12:48:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:32.306 12:48:05 -- nvmf/common.sh@116 -- # sync 00:20:32.306 12:48:05 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:32.306 12:48:05 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:32.306 12:48:05 -- nvmf/common.sh@119 -- # set +e 00:20:32.306 12:48:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:32.306 12:48:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:32.306 rmmod nvme_rdma 00:20:32.306 rmmod nvme_fabrics 00:20:32.306 12:48:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:32.306 12:48:05 -- nvmf/common.sh@123 -- # set -e 00:20:32.306 12:48:05 -- nvmf/common.sh@124 -- # return 0 00:20:32.306 12:48:05 -- nvmf/common.sh@477 -- # '[' -n 549785 ']' 00:20:32.306 12:48:05 -- nvmf/common.sh@478 -- # killprocess 549785 00:20:32.306 12:48:05 -- common/autotest_common.sh@936 -- # '[' -z 549785 ']' 00:20:32.306 12:48:05 -- common/autotest_common.sh@940 -- # kill -0 549785 00:20:32.306 12:48:05 -- common/autotest_common.sh@941 -- # uname 00:20:32.306 12:48:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:32.306 12:48:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 549785 00:20:32.566 12:48:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:32.566 12:48:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:32.566 12:48:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 549785' 00:20:32.566 killing process with pid 549785 00:20:32.566 12:48:05 -- common/autotest_common.sh@955 -- # kill 549785 00:20:32.566 12:48:05 -- common/autotest_common.sh@960 -- # wait 549785 00:20:32.566 12:48:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:32.566 12:48:05 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:32.566 00:20:32.566 real 0m27.551s 00:20:32.566 user 2m31.880s 00:20:32.566 sys 0m10.335s 00:20:32.566 12:48:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:32.566 12:48:05 -- common/autotest_common.sh@10 -- # set +x 00:20:32.566 ************************************ 00:20:32.566 END TEST nvmf_fio_target 00:20:32.566 ************************************ 00:20:32.828 12:48:05 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:20:32.828 12:48:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:32.828 12:48:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:32.828 12:48:05 -- common/autotest_common.sh@10 -- # set +x 00:20:32.828 ************************************ 00:20:32.828 START TEST nvmf_bdevio 00:20:32.828 ************************************ 00:20:32.828 12:48:05 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:20:32.828 * Looking for test storage... 00:20:32.828 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:32.828 12:48:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:32.828 12:48:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:32.828 12:48:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:32.828 12:48:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:32.828 12:48:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:32.828 12:48:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:32.828 12:48:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:32.828 12:48:05 -- scripts/common.sh@335 -- # IFS=.-: 00:20:32.828 12:48:05 -- scripts/common.sh@335 -- # read -ra ver1 00:20:32.828 12:48:05 -- scripts/common.sh@336 -- # IFS=.-: 00:20:32.828 12:48:05 -- scripts/common.sh@336 -- # read -ra ver2 00:20:32.828 12:48:05 -- scripts/common.sh@337 -- # local 'op=<' 00:20:32.828 12:48:05 -- scripts/common.sh@339 -- # ver1_l=2 00:20:32.828 12:48:05 -- scripts/common.sh@340 -- # ver2_l=1 00:20:32.828 12:48:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:32.828 12:48:05 -- scripts/common.sh@343 -- # case "$op" in 00:20:32.828 12:48:05 -- scripts/common.sh@344 -- # : 1 00:20:32.828 12:48:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:32.828 12:48:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:32.828 12:48:05 -- scripts/common.sh@364 -- # decimal 1 00:20:32.828 12:48:05 -- scripts/common.sh@352 -- # local d=1 00:20:32.828 12:48:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:32.828 12:48:05 -- scripts/common.sh@354 -- # echo 1 00:20:32.828 12:48:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:32.828 12:48:05 -- scripts/common.sh@365 -- # decimal 2 00:20:32.828 12:48:05 -- scripts/common.sh@352 -- # local d=2 00:20:32.828 12:48:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:32.828 12:48:05 -- scripts/common.sh@354 -- # echo 2 00:20:32.828 12:48:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:32.828 12:48:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:32.828 12:48:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:32.828 12:48:05 -- scripts/common.sh@367 -- # return 0 00:20:32.828 12:48:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:32.828 12:48:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:32.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.828 --rc genhtml_branch_coverage=1 00:20:32.828 --rc genhtml_function_coverage=1 00:20:32.828 --rc genhtml_legend=1 00:20:32.828 --rc geninfo_all_blocks=1 00:20:32.828 --rc geninfo_unexecuted_blocks=1 00:20:32.828 00:20:32.828 ' 00:20:32.828 12:48:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:32.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.828 --rc genhtml_branch_coverage=1 00:20:32.828 --rc genhtml_function_coverage=1 00:20:32.828 --rc genhtml_legend=1 00:20:32.828 --rc geninfo_all_blocks=1 00:20:32.828 --rc geninfo_unexecuted_blocks=1 00:20:32.828 00:20:32.828 ' 00:20:32.828 12:48:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:32.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.828 --rc genhtml_branch_coverage=1 00:20:32.828 --rc 
genhtml_function_coverage=1 00:20:32.828 --rc genhtml_legend=1 00:20:32.828 --rc geninfo_all_blocks=1 00:20:32.828 --rc geninfo_unexecuted_blocks=1 00:20:32.828 00:20:32.828 ' 00:20:32.828 12:48:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:32.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.828 --rc genhtml_branch_coverage=1 00:20:32.828 --rc genhtml_function_coverage=1 00:20:32.828 --rc genhtml_legend=1 00:20:32.828 --rc geninfo_all_blocks=1 00:20:32.828 --rc geninfo_unexecuted_blocks=1 00:20:32.828 00:20:32.828 ' 00:20:32.828 12:48:05 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:32.828 12:48:05 -- nvmf/common.sh@7 -- # uname -s 00:20:32.828 12:48:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.828 12:48:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.828 12:48:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.828 12:48:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.828 12:48:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.828 12:48:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.828 12:48:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.828 12:48:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.828 12:48:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.828 12:48:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.828 12:48:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:32.828 12:48:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:32.828 12:48:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.828 12:48:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.828 12:48:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:32.828 12:48:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:32.828 12:48:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.828 12:48:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.828 12:48:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.828 12:48:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.828 12:48:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.828 12:48:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.828 12:48:05 -- paths/export.sh@5 -- # export PATH 00:20:32.828 12:48:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.828 12:48:05 -- nvmf/common.sh@46 -- # : 0 00:20:32.828 12:48:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:32.828 12:48:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:32.828 12:48:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:32.828 12:48:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.828 12:48:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.828 12:48:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:32.828 12:48:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:32.828 12:48:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:32.829 12:48:05 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:32.829 12:48:05 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:32.829 12:48:05 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:32.829 12:48:05 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:32.829 12:48:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.829 12:48:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:32.829 12:48:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:32.829 12:48:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:32.829 12:48:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.829 12:48:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.829 12:48:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.829 12:48:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:32.829 12:48:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:32.829 12:48:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:32.829 12:48:05 -- common/autotest_common.sh@10 -- # set +x 00:20:40.976 12:48:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:40.976 12:48:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:40.976 12:48:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:40.976 12:48:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:40.976 12:48:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:40.976 12:48:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:40.976 12:48:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:40.976 12:48:12 -- nvmf/common.sh@294 -- # net_devs=() 00:20:40.976 12:48:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:40.976 12:48:12 -- nvmf/common.sh@295 
-- # e810=() 00:20:40.976 12:48:12 -- nvmf/common.sh@295 -- # local -ga e810 00:20:40.976 12:48:12 -- nvmf/common.sh@296 -- # x722=() 00:20:40.976 12:48:12 -- nvmf/common.sh@296 -- # local -ga x722 00:20:40.976 12:48:12 -- nvmf/common.sh@297 -- # mlx=() 00:20:40.976 12:48:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:40.976 12:48:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.976 12:48:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.976 12:48:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.976 12:48:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.976 12:48:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.976 12:48:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.976 12:48:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.976 12:48:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.976 12:48:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.976 12:48:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.976 12:48:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.976 12:48:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:40.976 12:48:12 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:40.976 12:48:12 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:40.976 12:48:12 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:40.976 12:48:12 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:40.976 12:48:12 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:40.976 12:48:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:40.976 12:48:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:40.976 12:48:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:20:40.976 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:20:40.976 12:48:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:40.976 12:48:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:40.976 12:48:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:40.976 12:48:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:40.976 12:48:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:40.976 12:48:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:40.976 12:48:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:40.976 12:48:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:20:40.976 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:20:40.976 12:48:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:40.976 12:48:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:40.976 12:48:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:40.976 12:48:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:40.976 12:48:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:40.976 12:48:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:40.976 12:48:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:40.976 12:48:12 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:40.976 12:48:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:40.976 12:48:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.976 12:48:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:20:40.976 12:48:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.976 12:48:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:20:40.976 Found net devices under 0000:98:00.0: mlx_0_0 00:20:40.976 12:48:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.976 12:48:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:40.976 12:48:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.977 12:48:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:40.977 12:48:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.977 12:48:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:20:40.977 Found net devices under 0000:98:00.1: mlx_0_1 00:20:40.977 12:48:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.977 12:48:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:40.977 12:48:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:40.977 12:48:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:40.977 12:48:12 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:40.977 12:48:12 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:40.977 12:48:12 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:40.977 12:48:12 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:40.977 12:48:12 -- nvmf/common.sh@57 -- # uname 00:20:40.977 12:48:12 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:40.977 12:48:12 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:40.977 12:48:12 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:40.977 12:48:12 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:40.977 12:48:12 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:40.977 12:48:12 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:40.977 12:48:12 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:40.977 12:48:12 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:40.977 12:48:12 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:40.977 12:48:12 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:40.977 12:48:12 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:40.977 12:48:12 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:40.977 12:48:12 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:40.977 12:48:12 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:40.977 12:48:12 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:40.977 12:48:12 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:40.977 12:48:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:40.977 12:48:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.977 12:48:12 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:40.977 12:48:12 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:40.977 12:48:12 -- nvmf/common.sh@104 -- # continue 2 00:20:40.977 12:48:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:40.977 12:48:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.977 12:48:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:40.977 12:48:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.977 12:48:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:40.977 12:48:12 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:40.977 12:48:12 -- nvmf/common.sh@104 -- # continue 2 00:20:40.977 12:48:12 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:20:40.977 12:48:12 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:40.977 12:48:12 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:40.977 12:48:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:40.977 12:48:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:40.977 12:48:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:40.977 12:48:12 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:40.977 12:48:12 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:40.977 12:48:12 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:40.977 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:40.977 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:20:40.977 altname enp152s0f0np0 00:20:40.977 altname ens817f0np0 00:20:40.977 inet 192.168.100.8/24 scope global mlx_0_0 00:20:40.977 valid_lft forever preferred_lft forever 00:20:40.977 12:48:12 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:40.977 12:48:12 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:40.977 12:48:13 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:40.977 12:48:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:40.977 12:48:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:40.977 12:48:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:40.977 12:48:13 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:40.977 12:48:13 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:40.977 12:48:13 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:40.977 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:40.977 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:20:40.977 altname enp152s0f1np1 00:20:40.977 altname ens817f1np1 00:20:40.977 inet 192.168.100.9/24 scope global mlx_0_1 00:20:40.977 valid_lft forever preferred_lft forever 00:20:40.977 12:48:13 -- nvmf/common.sh@410 -- # return 0 00:20:40.977 12:48:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:40.977 12:48:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:40.977 12:48:13 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:40.977 12:48:13 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:40.977 12:48:13 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:40.977 12:48:13 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:40.977 12:48:13 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:40.977 12:48:13 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:40.977 12:48:13 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:40.977 12:48:13 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:40.977 12:48:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:40.977 12:48:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.977 12:48:13 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:40.977 12:48:13 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:40.977 12:48:13 -- nvmf/common.sh@104 -- # continue 2 00:20:40.977 12:48:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:40.977 12:48:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.977 12:48:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:40.977 12:48:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.977 12:48:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:40.977 12:48:13 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:40.977 12:48:13 -- 
nvmf/common.sh@104 -- # continue 2 00:20:40.977 12:48:13 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:40.977 12:48:13 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:40.977 12:48:13 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:40.977 12:48:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:40.977 12:48:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:40.977 12:48:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:40.977 12:48:13 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:40.977 12:48:13 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:40.977 12:48:13 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:40.977 12:48:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:40.977 12:48:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:40.977 12:48:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:40.977 12:48:13 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:40.977 192.168.100.9' 00:20:40.977 12:48:13 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:40.977 192.168.100.9' 00:20:40.977 12:48:13 -- nvmf/common.sh@445 -- # head -n 1 00:20:40.977 12:48:13 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:40.977 12:48:13 -- nvmf/common.sh@446 -- # tail -n +2 00:20:40.977 12:48:13 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:40.977 192.168.100.9' 00:20:40.977 12:48:13 -- nvmf/common.sh@446 -- # head -n 1 00:20:40.977 12:48:13 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:40.977 12:48:13 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:40.977 12:48:13 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:40.977 12:48:13 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:40.977 12:48:13 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:40.977 12:48:13 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:40.977 12:48:13 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:40.977 12:48:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:40.977 12:48:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:40.977 12:48:13 -- common/autotest_common.sh@10 -- # set +x 00:20:40.977 12:48:13 -- nvmf/common.sh@469 -- # nvmfpid=559036 00:20:40.977 12:48:13 -- nvmf/common.sh@470 -- # waitforlisten 559036 00:20:40.977 12:48:13 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:40.977 12:48:13 -- common/autotest_common.sh@829 -- # '[' -z 559036 ']' 00:20:40.977 12:48:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.977 12:48:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.977 12:48:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.977 12:48:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.977 12:48:13 -- common/autotest_common.sh@10 -- # set +x 00:20:40.977 [2024-11-20 12:48:13.174025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:40.977 [2024-11-20 12:48:13.174097] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.977 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.977 [2024-11-20 12:48:13.257871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.977 [2024-11-20 12:48:13.347511] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:40.977 [2024-11-20 12:48:13.347667] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.977 [2024-11-20 12:48:13.347678] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.977 [2024-11-20 12:48:13.347685] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.977 [2024-11-20 12:48:13.347847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:40.977 [2024-11-20 12:48:13.348013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:40.977 [2024-11-20 12:48:13.348220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:40.977 [2024-11-20 12:48:13.348221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.977 12:48:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.977 12:48:13 -- common/autotest_common.sh@862 -- # return 0 00:20:40.978 12:48:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:40.978 12:48:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:40.978 12:48:13 -- common/autotest_common.sh@10 -- # set +x 00:20:40.978 12:48:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.978 12:48:14 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:40.978 12:48:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.978 12:48:14 -- common/autotest_common.sh@10 -- # set +x 00:20:40.978 [2024-11-20 12:48:14.073545] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19eb0a0/0x19ef590) succeed. 00:20:41.239 [2024-11-20 12:48:14.089333] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19ec690/0x1a30c30) succeed. 
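The rpc_cmd calls traced just below build the bdevio target on top of the RDMA transport created above: a malloc bdev, the nqn.2016-06.io.spdk:cnode1 subsystem, its namespace, and an RDMA listener on 192.168.100.8:4420. Consolidated as direct rpc.py invocations, the same bring-up looks roughly like the sketch below (an illustration that mirrors the traced commands, not part of the captured log; it assumes the default /var/tmp/spdk.sock RPC socket and is run from the SPDK checkout used in this job):

  # sketch mirroring the rpc_cmd trace that follows
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # RDMA transport (created above)
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB backing bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420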
00:20:41.239 12:48:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.239 12:48:14 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:41.239 12:48:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.239 12:48:14 -- common/autotest_common.sh@10 -- # set +x 00:20:41.239 Malloc0 00:20:41.239 12:48:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.239 12:48:14 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:41.239 12:48:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.239 12:48:14 -- common/autotest_common.sh@10 -- # set +x 00:20:41.239 12:48:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.239 12:48:14 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:41.239 12:48:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.239 12:48:14 -- common/autotest_common.sh@10 -- # set +x 00:20:41.239 12:48:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.239 12:48:14 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:41.239 12:48:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.239 12:48:14 -- common/autotest_common.sh@10 -- # set +x 00:20:41.239 [2024-11-20 12:48:14.301427] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:41.239 12:48:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.239 12:48:14 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:41.239 12:48:14 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:41.239 12:48:14 -- nvmf/common.sh@520 -- # config=() 00:20:41.239 12:48:14 -- nvmf/common.sh@520 -- # local subsystem config 00:20:41.239 12:48:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:41.239 12:48:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:41.239 { 00:20:41.239 "params": { 00:20:41.239 "name": "Nvme$subsystem", 00:20:41.239 "trtype": "$TEST_TRANSPORT", 00:20:41.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.239 "adrfam": "ipv4", 00:20:41.239 "trsvcid": "$NVMF_PORT", 00:20:41.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.239 "hdgst": ${hdgst:-false}, 00:20:41.239 "ddgst": ${ddgst:-false} 00:20:41.239 }, 00:20:41.239 "method": "bdev_nvme_attach_controller" 00:20:41.239 } 00:20:41.239 EOF 00:20:41.239 )") 00:20:41.239 12:48:14 -- nvmf/common.sh@542 -- # cat 00:20:41.239 12:48:14 -- nvmf/common.sh@544 -- # jq . 00:20:41.239 12:48:14 -- nvmf/common.sh@545 -- # IFS=, 00:20:41.239 12:48:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:41.239 "params": { 00:20:41.239 "name": "Nvme1", 00:20:41.239 "trtype": "rdma", 00:20:41.239 "traddr": "192.168.100.8", 00:20:41.239 "adrfam": "ipv4", 00:20:41.239 "trsvcid": "4420", 00:20:41.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.239 "hdgst": false, 00:20:41.239 "ddgst": false 00:20:41.239 }, 00:20:41.239 "method": "bdev_nvme_attach_controller" 00:20:41.239 }' 00:20:41.499 [2024-11-20 12:48:14.355433] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:41.499 [2024-11-20 12:48:14.355501] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid559193 ] 00:20:41.499 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.499 [2024-11-20 12:48:14.424453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:41.499 [2024-11-20 12:48:14.497457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.499 [2024-11-20 12:48:14.497576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.499 [2024-11-20 12:48:14.497579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.760 [2024-11-20 12:48:14.658974] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:41.760 [2024-11-20 12:48:14.659007] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:41.760 I/O targets: 00:20:41.760 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:41.760 00:20:41.760 00:20:41.760 CUnit - A unit testing framework for C - Version 2.1-3 00:20:41.760 http://cunit.sourceforge.net/ 00:20:41.760 00:20:41.760 00:20:41.760 Suite: bdevio tests on: Nvme1n1 00:20:41.760 Test: blockdev write read block ...passed 00:20:41.761 Test: blockdev write zeroes read block ...passed 00:20:41.761 Test: blockdev write zeroes read no split ...passed 00:20:41.761 Test: blockdev write zeroes read split ...passed 00:20:41.761 Test: blockdev write zeroes read split partial ...passed 00:20:41.761 Test: blockdev reset ...[2024-11-20 12:48:14.688996] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.761 [2024-11-20 12:48:14.720971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:41.761 [2024-11-20 12:48:14.759418] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:41.761 passed 00:20:41.761 Test: blockdev write read 8 blocks ...passed 00:20:41.761 Test: blockdev write read size > 128k ...passed 00:20:41.761 Test: blockdev write read invalid size ...passed 00:20:41.761 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:41.761 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:41.761 Test: blockdev write read max offset ...passed 00:20:41.761 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:41.761 Test: blockdev writev readv 8 blocks ...passed 00:20:41.761 Test: blockdev writev readv 30 x 1block ...passed 00:20:41.761 Test: blockdev writev readv block ...passed 00:20:41.761 Test: blockdev writev readv size > 128k ...passed 00:20:41.761 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:41.761 Test: blockdev comparev and writev ...[2024-11-20 12:48:14.764697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.761 [2024-11-20 12:48:14.764721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.761 [2024-11-20 12:48:14.764729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.761 [2024-11-20 12:48:14.764736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.761 [2024-11-20 12:48:14.764937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.761 [2024-11-20 12:48:14.764944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:41.761 [2024-11-20 12:48:14.764951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.761 [2024-11-20 12:48:14.764956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:41.761 [2024-11-20 12:48:14.765173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.761 [2024-11-20 12:48:14.765180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:41.761 [2024-11-20 12:48:14.765187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.761 [2024-11-20 12:48:14.765196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:41.761 [2024-11-20 12:48:14.765358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.761 [2024-11-20 12:48:14.765364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:41.761 [2024-11-20 12:48:14.765371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.761 [2024-11-20 12:48:14.765376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:41.761 passed 00:20:41.761 Test: blockdev nvme passthru rw ...passed 00:20:41.761 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:48:14.765978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:41.761 [2024-11-20 12:48:14.765989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:41.761 [2024-11-20 12:48:14.766026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:41.761 [2024-11-20 12:48:14.766032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:41.761 [2024-11-20 12:48:14.766070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:41.761 [2024-11-20 12:48:14.766075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:41.761 [2024-11-20 12:48:14.766115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:41.761 [2024-11-20 12:48:14.766121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:41.761 passed 00:20:41.761 Test: blockdev nvme admin passthru ...passed 00:20:41.761 Test: blockdev copy ...passed 00:20:41.761 00:20:41.761 Run Summary: Type Total Ran Passed Failed Inactive 00:20:41.761 suites 1 1 n/a 0 0 00:20:41.761 tests 23 23 23 0 0 00:20:41.761 asserts 152 152 152 0 n/a 00:20:41.761 00:20:41.761 Elapsed time = 0.218 seconds 00:20:42.022 12:48:14 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:42.022 12:48:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.022 12:48:14 -- common/autotest_common.sh@10 -- # set +x 00:20:42.022 12:48:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.022 12:48:14 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:42.022 12:48:14 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:42.022 12:48:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:42.022 12:48:14 -- nvmf/common.sh@116 -- # sync 00:20:42.022 12:48:14 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:42.022 12:48:14 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:42.022 12:48:14 -- nvmf/common.sh@119 -- # set +e 00:20:42.022 12:48:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:42.022 12:48:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:42.022 rmmod nvme_rdma 00:20:42.022 rmmod nvme_fabrics 00:20:42.022 12:48:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:42.022 12:48:14 -- nvmf/common.sh@123 -- # set -e 00:20:42.022 12:48:14 -- nvmf/common.sh@124 -- # return 0 00:20:42.022 12:48:14 -- nvmf/common.sh@477 -- # '[' -n 559036 ']' 00:20:42.022 12:48:14 -- nvmf/common.sh@478 -- # killprocess 559036 00:20:42.022 12:48:14 -- common/autotest_common.sh@936 -- # '[' -z 559036 ']' 00:20:42.022 12:48:14 -- common/autotest_common.sh@940 -- # kill -0 559036 00:20:42.022 12:48:14 -- common/autotest_common.sh@941 -- # uname 00:20:42.022 12:48:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:42.022 12:48:15 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 559036 00:20:42.022 12:48:15 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:20:42.022 12:48:15 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:20:42.022 12:48:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 559036' 00:20:42.022 killing process with pid 559036 00:20:42.022 12:48:15 -- common/autotest_common.sh@955 -- # kill 559036 00:20:42.022 12:48:15 -- common/autotest_common.sh@960 -- # wait 559036 00:20:42.282 12:48:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:42.282 12:48:15 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:42.282 00:20:42.282 real 0m9.655s 00:20:42.282 user 0m10.966s 00:20:42.282 sys 0m6.104s 00:20:42.282 12:48:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:42.282 12:48:15 -- common/autotest_common.sh@10 -- # set +x 00:20:42.282 ************************************ 00:20:42.282 END TEST nvmf_bdevio 00:20:42.282 ************************************ 00:20:42.543 12:48:15 -- nvmf/nvmf.sh@57 -- # '[' rdma = tcp ']' 00:20:42.544 12:48:15 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:20:42.544 12:48:15 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:42.544 12:48:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:42.544 12:48:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:42.544 12:48:15 -- common/autotest_common.sh@10 -- # set +x 00:20:42.544 ************************************ 00:20:42.544 START TEST nvmf_fuzz 00:20:42.544 ************************************ 00:20:42.544 12:48:15 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:42.544 * Looking for test storage... 00:20:42.544 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:42.544 12:48:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:42.544 12:48:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:42.544 12:48:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:42.544 12:48:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:42.544 12:48:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:42.544 12:48:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:42.544 12:48:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:42.544 12:48:15 -- scripts/common.sh@335 -- # IFS=.-: 00:20:42.544 12:48:15 -- scripts/common.sh@335 -- # read -ra ver1 00:20:42.544 12:48:15 -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.544 12:48:15 -- scripts/common.sh@336 -- # read -ra ver2 00:20:42.544 12:48:15 -- scripts/common.sh@337 -- # local 'op=<' 00:20:42.544 12:48:15 -- scripts/common.sh@339 -- # ver1_l=2 00:20:42.544 12:48:15 -- scripts/common.sh@340 -- # ver2_l=1 00:20:42.544 12:48:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:42.544 12:48:15 -- scripts/common.sh@343 -- # case "$op" in 00:20:42.544 12:48:15 -- scripts/common.sh@344 -- # : 1 00:20:42.544 12:48:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:42.544 12:48:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.544 12:48:15 -- scripts/common.sh@364 -- # decimal 1 00:20:42.544 12:48:15 -- scripts/common.sh@352 -- # local d=1 00:20:42.544 12:48:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.544 12:48:15 -- scripts/common.sh@354 -- # echo 1 00:20:42.544 12:48:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:42.544 12:48:15 -- scripts/common.sh@365 -- # decimal 2 00:20:42.544 12:48:15 -- scripts/common.sh@352 -- # local d=2 00:20:42.544 12:48:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.544 12:48:15 -- scripts/common.sh@354 -- # echo 2 00:20:42.544 12:48:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:42.544 12:48:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:42.544 12:48:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:42.544 12:48:15 -- scripts/common.sh@367 -- # return 0 00:20:42.544 12:48:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.544 12:48:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:42.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.544 --rc genhtml_branch_coverage=1 00:20:42.544 --rc genhtml_function_coverage=1 00:20:42.544 --rc genhtml_legend=1 00:20:42.544 --rc geninfo_all_blocks=1 00:20:42.544 --rc geninfo_unexecuted_blocks=1 00:20:42.544 00:20:42.544 ' 00:20:42.544 12:48:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:42.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.544 --rc genhtml_branch_coverage=1 00:20:42.544 --rc genhtml_function_coverage=1 00:20:42.544 --rc genhtml_legend=1 00:20:42.544 --rc geninfo_all_blocks=1 00:20:42.544 --rc geninfo_unexecuted_blocks=1 00:20:42.544 00:20:42.544 ' 00:20:42.544 12:48:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:42.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.544 --rc genhtml_branch_coverage=1 00:20:42.544 --rc genhtml_function_coverage=1 00:20:42.544 --rc genhtml_legend=1 00:20:42.544 --rc geninfo_all_blocks=1 00:20:42.544 --rc geninfo_unexecuted_blocks=1 00:20:42.544 00:20:42.544 ' 00:20:42.544 12:48:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:42.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.544 --rc genhtml_branch_coverage=1 00:20:42.544 --rc genhtml_function_coverage=1 00:20:42.544 --rc genhtml_legend=1 00:20:42.544 --rc geninfo_all_blocks=1 00:20:42.544 --rc geninfo_unexecuted_blocks=1 00:20:42.544 00:20:42.544 ' 00:20:42.544 12:48:15 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.544 12:48:15 -- nvmf/common.sh@7 -- # uname -s 00:20:42.544 12:48:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.544 12:48:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.544 12:48:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.544 12:48:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.544 12:48:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.544 12:48:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.544 12:48:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.544 12:48:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.544 12:48:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.544 12:48:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.544 12:48:15 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:42.544 12:48:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:42.544 12:48:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.544 12:48:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.544 12:48:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.544 12:48:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:42.544 12:48:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.544 12:48:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.544 12:48:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.544 12:48:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.544 12:48:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.544 12:48:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.544 12:48:15 -- paths/export.sh@5 -- # export PATH 00:20:42.544 12:48:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.544 12:48:15 -- nvmf/common.sh@46 -- # : 0 00:20:42.544 12:48:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:42.544 12:48:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:42.544 12:48:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:42.544 12:48:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.544 12:48:15 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.545 12:48:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:42.545 12:48:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:42.545 12:48:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:42.545 12:48:15 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:42.545 12:48:15 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:42.545 12:48:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.545 12:48:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:42.545 12:48:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:42.545 12:48:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:42.545 12:48:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.545 12:48:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.545 12:48:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.545 12:48:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:42.545 12:48:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:42.545 12:48:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:42.545 12:48:15 -- common/autotest_common.sh@10 -- # set +x 00:20:50.683 12:48:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:50.683 12:48:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:50.683 12:48:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:50.683 12:48:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:50.683 12:48:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:50.683 12:48:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:50.683 12:48:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:50.683 12:48:22 -- nvmf/common.sh@294 -- # net_devs=() 00:20:50.683 12:48:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:50.683 12:48:22 -- nvmf/common.sh@295 -- # e810=() 00:20:50.683 12:48:22 -- nvmf/common.sh@295 -- # local -ga e810 00:20:50.683 12:48:22 -- nvmf/common.sh@296 -- # x722=() 00:20:50.683 12:48:22 -- nvmf/common.sh@296 -- # local -ga x722 00:20:50.683 12:48:22 -- nvmf/common.sh@297 -- # mlx=() 00:20:50.683 12:48:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:50.683 12:48:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.683 12:48:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.683 12:48:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.683 12:48:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.683 12:48:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.683 12:48:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.683 12:48:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.683 12:48:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.683 12:48:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.683 12:48:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.683 12:48:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.683 12:48:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:50.683 12:48:22 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:50.683 12:48:22 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:50.683 12:48:22 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:50.683 12:48:22 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
00:20:50.683 12:48:22 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:50.683 12:48:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:50.683 12:48:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:50.683 12:48:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:20:50.683 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:20:50.683 12:48:22 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:50.683 12:48:22 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:50.683 12:48:22 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:50.683 12:48:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:50.683 12:48:22 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:50.683 12:48:22 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:50.683 12:48:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:50.683 12:48:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:20:50.683 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:20:50.683 12:48:22 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:50.683 12:48:22 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:50.683 12:48:22 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:50.683 12:48:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:50.683 12:48:22 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:50.683 12:48:22 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:50.683 12:48:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:50.683 12:48:22 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:50.683 12:48:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:50.683 12:48:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.683 12:48:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:50.683 12:48:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.683 12:48:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:20:50.683 Found net devices under 0000:98:00.0: mlx_0_0 00:20:50.683 12:48:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.683 12:48:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:50.683 12:48:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.683 12:48:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:50.683 12:48:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.683 12:48:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:20:50.683 Found net devices under 0000:98:00.1: mlx_0_1 00:20:50.683 12:48:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.683 12:48:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:50.683 12:48:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:50.683 12:48:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:50.683 12:48:22 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:50.683 12:48:22 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:50.683 12:48:22 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:50.683 12:48:22 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:50.683 12:48:22 -- nvmf/common.sh@57 -- # uname 00:20:50.683 12:48:22 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:50.683 12:48:22 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:50.683 12:48:22 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:50.683 12:48:22 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:50.683 
12:48:22 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:50.683 12:48:22 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:50.683 12:48:22 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:50.683 12:48:22 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:50.683 12:48:22 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:50.683 12:48:22 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:50.683 12:48:22 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:50.683 12:48:22 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:50.683 12:48:22 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:50.683 12:48:22 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:50.683 12:48:22 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:50.683 12:48:22 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:50.683 12:48:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:50.683 12:48:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.684 12:48:22 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:50.684 12:48:22 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:50.684 12:48:22 -- nvmf/common.sh@104 -- # continue 2 00:20:50.684 12:48:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:50.684 12:48:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.684 12:48:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:50.684 12:48:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.684 12:48:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:50.684 12:48:22 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:50.684 12:48:22 -- nvmf/common.sh@104 -- # continue 2 00:20:50.684 12:48:22 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:50.684 12:48:22 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:50.684 12:48:22 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:50.684 12:48:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:50.684 12:48:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:50.684 12:48:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:50.684 12:48:22 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:50.684 12:48:22 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:50.684 12:48:22 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:50.684 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:50.684 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:20:50.684 altname enp152s0f0np0 00:20:50.684 altname ens817f0np0 00:20:50.684 inet 192.168.100.8/24 scope global mlx_0_0 00:20:50.684 valid_lft forever preferred_lft forever 00:20:50.684 12:48:22 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:50.684 12:48:22 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:50.684 12:48:22 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:50.684 12:48:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:50.684 12:48:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:50.684 12:48:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:50.684 12:48:22 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:50.684 12:48:22 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:50.684 12:48:22 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:50.684 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:50.684 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:20:50.684 altname enp152s0f1np1 
00:20:50.684 altname ens817f1np1 00:20:50.684 inet 192.168.100.9/24 scope global mlx_0_1 00:20:50.684 valid_lft forever preferred_lft forever 00:20:50.684 12:48:22 -- nvmf/common.sh@410 -- # return 0 00:20:50.684 12:48:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:50.684 12:48:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:50.684 12:48:22 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:50.684 12:48:22 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:50.684 12:48:22 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:50.684 12:48:22 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:50.684 12:48:22 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:50.684 12:48:22 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:50.684 12:48:22 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:50.684 12:48:22 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:50.684 12:48:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:50.684 12:48:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.684 12:48:22 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:50.684 12:48:22 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:50.684 12:48:22 -- nvmf/common.sh@104 -- # continue 2 00:20:50.684 12:48:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:50.684 12:48:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.684 12:48:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:50.684 12:48:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.684 12:48:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:50.684 12:48:22 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:50.684 12:48:22 -- nvmf/common.sh@104 -- # continue 2 00:20:50.684 12:48:22 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:50.684 12:48:22 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:50.684 12:48:22 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:50.684 12:48:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:50.684 12:48:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:50.684 12:48:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:50.684 12:48:22 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:50.684 12:48:22 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:50.684 12:48:22 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:50.684 12:48:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:50.684 12:48:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:50.684 12:48:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:50.684 12:48:22 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:50.684 192.168.100.9' 00:20:50.684 12:48:22 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:50.684 192.168.100.9' 00:20:50.684 12:48:22 -- nvmf/common.sh@445 -- # head -n 1 00:20:50.684 12:48:22 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:50.684 12:48:22 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:50.684 192.168.100.9' 00:20:50.684 12:48:22 -- nvmf/common.sh@446 -- # head -n 1 00:20:50.684 12:48:22 -- nvmf/common.sh@446 -- # tail -n +2 00:20:50.684 12:48:22 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:50.684 12:48:22 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:50.684 12:48:22 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:20:50.684 12:48:22 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:50.684 12:48:22 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:50.684 12:48:22 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:50.684 12:48:22 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=563228 00:20:50.684 12:48:22 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:50.684 12:48:22 -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:50.684 12:48:22 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 563228 00:20:50.684 12:48:22 -- common/autotest_common.sh@829 -- # '[' -z 563228 ']' 00:20:50.684 12:48:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.684 12:48:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.684 12:48:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.684 12:48:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.684 12:48:22 -- common/autotest_common.sh@10 -- # set +x 00:20:50.684 12:48:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:50.684 12:48:23 -- common/autotest_common.sh@862 -- # return 0 00:20:50.684 12:48:23 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:50.684 12:48:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.684 12:48:23 -- common/autotest_common.sh@10 -- # set +x 00:20:50.684 12:48:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.684 12:48:23 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:50.684 12:48:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.684 12:48:23 -- common/autotest_common.sh@10 -- # set +x 00:20:50.684 Malloc0 00:20:50.684 12:48:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.684 12:48:23 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:50.684 12:48:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.685 12:48:23 -- common/autotest_common.sh@10 -- # set +x 00:20:50.685 12:48:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.685 12:48:23 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:50.685 12:48:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.685 12:48:23 -- common/autotest_common.sh@10 -- # set +x 00:20:50.685 12:48:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.685 12:48:23 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:50.685 12:48:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.685 12:48:23 -- common/autotest_common.sh@10 -- # set +x 00:20:50.685 12:48:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.685 12:48:23 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:20:50.685 12:48:23 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:21:22.828 Fuzzing completed. Shutting down the fuzz application 00:21:22.828 00:21:22.828 Dumping successful admin opcodes: 00:21:22.828 8, 9, 10, 24, 00:21:22.828 Dumping successful io opcodes: 00:21:22.828 0, 9, 00:21:22.828 NS: 0x200003af1f00 I/O qp, Total commands completed: 1442344, total successful commands: 8502, random_seed: 2714262592 00:21:22.828 NS: 0x200003af1f00 admin qp, Total commands completed: 204073, total successful commands: 1642, random_seed: 1065344128 00:21:22.828 12:48:54 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:22.828 Fuzzing completed. Shutting down the fuzz application 00:21:22.828 00:21:22.828 Dumping successful admin opcodes: 00:21:22.828 24, 00:21:22.828 Dumping successful io opcodes: 00:21:22.828 00:21:22.828 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2787319547 00:21:22.828 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2787387805 00:21:22.828 12:48:55 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:22.828 12:48:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.828 12:48:55 -- common/autotest_common.sh@10 -- # set +x 00:21:22.828 12:48:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.828 12:48:55 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:22.828 12:48:55 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:22.828 12:48:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:22.828 12:48:55 -- nvmf/common.sh@116 -- # sync 00:21:22.828 12:48:55 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:21:22.828 12:48:55 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:21:22.828 12:48:55 -- nvmf/common.sh@119 -- # set +e 00:21:22.828 12:48:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:22.828 12:48:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:21:22.828 rmmod nvme_rdma 00:21:22.828 rmmod nvme_fabrics 00:21:22.828 12:48:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:22.828 12:48:55 -- nvmf/common.sh@123 -- # set -e 00:21:22.828 12:48:55 -- nvmf/common.sh@124 -- # return 0 00:21:22.828 12:48:55 -- nvmf/common.sh@477 -- # '[' -n 563228 ']' 00:21:22.828 12:48:55 -- nvmf/common.sh@478 -- # killprocess 563228 00:21:22.828 12:48:55 -- common/autotest_common.sh@936 -- # '[' -z 563228 ']' 00:21:22.828 12:48:55 -- common/autotest_common.sh@940 -- # kill -0 563228 00:21:22.828 12:48:55 -- common/autotest_common.sh@941 -- # uname 00:21:22.828 12:48:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:22.828 12:48:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 563228 00:21:22.828 12:48:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:22.828 12:48:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:22.828 12:48:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 563228' 00:21:22.828 killing process with pid 563228 00:21:22.828 12:48:55 -- common/autotest_common.sh@955 -- # kill 563228 00:21:22.828 12:48:55 -- common/autotest_common.sh@960 -- # wait 563228 00:21:22.828 12:48:55 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:22.828 12:48:55 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:21:22.828 12:48:55 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:22.828 00:21:22.828 real 0m40.344s 00:21:22.828 user 0m55.244s 00:21:22.828 sys 0m16.454s 00:21:22.828 12:48:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:22.828 12:48:55 -- common/autotest_common.sh@10 -- # set +x 00:21:22.828 ************************************ 00:21:22.828 END TEST nvmf_fuzz 00:21:22.828 ************************************ 00:21:22.828 12:48:55 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:21:22.828 12:48:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:22.828 12:48:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:22.828 12:48:55 -- common/autotest_common.sh@10 -- # set +x 00:21:22.828 ************************************ 00:21:22.828 START TEST nvmf_multiconnection 00:21:22.828 ************************************ 00:21:22.828 12:48:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:21:22.828 * Looking for test storage... 00:21:22.828 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:22.829 12:48:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:22.829 12:48:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:22.829 12:48:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:23.090 12:48:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:23.090 12:48:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:23.090 12:48:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:23.090 12:48:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:23.090 12:48:55 -- scripts/common.sh@335 -- # IFS=.-: 00:21:23.090 12:48:55 -- scripts/common.sh@335 -- # read -ra ver1 00:21:23.090 12:48:55 -- scripts/common.sh@336 -- # IFS=.-: 00:21:23.090 12:48:55 -- scripts/common.sh@336 -- # read -ra ver2 00:21:23.090 12:48:55 -- scripts/common.sh@337 -- # local 'op=<' 00:21:23.090 12:48:55 -- scripts/common.sh@339 -- # ver1_l=2 00:21:23.090 12:48:55 -- scripts/common.sh@340 -- # ver2_l=1 00:21:23.090 12:48:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:23.090 12:48:55 -- scripts/common.sh@343 -- # case "$op" in 00:21:23.090 12:48:55 -- scripts/common.sh@344 -- # : 1 00:21:23.090 12:48:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:23.090 12:48:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:23.090 12:48:55 -- scripts/common.sh@364 -- # decimal 1 00:21:23.090 12:48:55 -- scripts/common.sh@352 -- # local d=1 00:21:23.090 12:48:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:23.090 12:48:55 -- scripts/common.sh@354 -- # echo 1 00:21:23.090 12:48:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:23.090 12:48:55 -- scripts/common.sh@365 -- # decimal 2 00:21:23.090 12:48:55 -- scripts/common.sh@352 -- # local d=2 00:21:23.090 12:48:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:23.090 12:48:55 -- scripts/common.sh@354 -- # echo 2 00:21:23.090 12:48:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:23.090 12:48:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:23.090 12:48:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:23.090 12:48:55 -- scripts/common.sh@367 -- # return 0 00:21:23.090 12:48:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:23.090 12:48:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:23.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.090 --rc genhtml_branch_coverage=1 00:21:23.090 --rc genhtml_function_coverage=1 00:21:23.090 --rc genhtml_legend=1 00:21:23.090 --rc geninfo_all_blocks=1 00:21:23.090 --rc geninfo_unexecuted_blocks=1 00:21:23.090 00:21:23.090 ' 00:21:23.090 12:48:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:23.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.090 --rc genhtml_branch_coverage=1 00:21:23.090 --rc genhtml_function_coverage=1 00:21:23.090 --rc genhtml_legend=1 00:21:23.090 --rc geninfo_all_blocks=1 00:21:23.090 --rc geninfo_unexecuted_blocks=1 00:21:23.090 00:21:23.090 ' 00:21:23.090 12:48:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:23.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.090 --rc genhtml_branch_coverage=1 00:21:23.090 --rc genhtml_function_coverage=1 00:21:23.090 --rc genhtml_legend=1 00:21:23.090 --rc geninfo_all_blocks=1 00:21:23.090 --rc geninfo_unexecuted_blocks=1 00:21:23.090 00:21:23.090 ' 00:21:23.090 12:48:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:23.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.090 --rc genhtml_branch_coverage=1 00:21:23.090 --rc genhtml_function_coverage=1 00:21:23.090 --rc genhtml_legend=1 00:21:23.090 --rc geninfo_all_blocks=1 00:21:23.090 --rc geninfo_unexecuted_blocks=1 00:21:23.090 00:21:23.090 ' 00:21:23.090 12:48:55 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:23.090 12:48:55 -- nvmf/common.sh@7 -- # uname -s 00:21:23.090 12:48:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:23.090 12:48:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.090 12:48:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.090 12:48:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.090 12:48:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:23.090 12:48:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:23.090 12:48:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.090 12:48:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:23.090 12:48:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.090 12:48:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:23.090 12:48:56 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:23.090 12:48:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:23.090 12:48:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:23.090 12:48:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:23.090 12:48:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:23.090 12:48:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:23.090 12:48:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:23.090 12:48:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:23.090 12:48:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:23.090 12:48:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.091 12:48:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.091 12:48:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.091 12:48:56 -- paths/export.sh@5 -- # export PATH 00:21:23.091 12:48:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.091 12:48:56 -- nvmf/common.sh@46 -- # : 0 00:21:23.091 12:48:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:23.091 12:48:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:23.091 12:48:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:23.091 12:48:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:23.091 12:48:56 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:23.091 12:48:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:23.091 12:48:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:23.091 12:48:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:23.091 12:48:56 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:23.091 12:48:56 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:23.091 12:48:56 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:23.091 12:48:56 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:23.091 12:48:56 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:21:23.091 12:48:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.091 12:48:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:23.091 12:48:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:23.091 12:48:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:23.091 12:48:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.091 12:48:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:23.091 12:48:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.091 12:48:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:23.091 12:48:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:23.091 12:48:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:23.091 12:48:56 -- common/autotest_common.sh@10 -- # set +x 00:21:31.242 12:49:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:31.242 12:49:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:31.242 12:49:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:31.242 12:49:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:31.242 12:49:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:31.242 12:49:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:31.242 12:49:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:31.242 12:49:03 -- nvmf/common.sh@294 -- # net_devs=() 00:21:31.242 12:49:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:31.242 12:49:03 -- nvmf/common.sh@295 -- # e810=() 00:21:31.242 12:49:03 -- nvmf/common.sh@295 -- # local -ga e810 00:21:31.242 12:49:03 -- nvmf/common.sh@296 -- # x722=() 00:21:31.242 12:49:03 -- nvmf/common.sh@296 -- # local -ga x722 00:21:31.242 12:49:03 -- nvmf/common.sh@297 -- # mlx=() 00:21:31.242 12:49:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:31.242 12:49:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.242 12:49:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.242 12:49:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.242 12:49:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.242 12:49:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.242 12:49:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.242 12:49:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.242 12:49:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.242 12:49:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.242 12:49:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.242 12:49:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.242 12:49:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:31.242 12:49:03 -- nvmf/common.sh@320 -- # [[ 
rdma == rdma ]] 00:21:31.242 12:49:03 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:21:31.242 12:49:03 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:21:31.242 12:49:03 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:21:31.242 12:49:03 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:21:31.242 12:49:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:31.243 12:49:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:31.243 12:49:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:21:31.243 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:21:31.243 12:49:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:31.243 12:49:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:31.243 12:49:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:21:31.243 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:21:31.243 12:49:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:31.243 12:49:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:31.243 12:49:03 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:31.243 12:49:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.243 12:49:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:31.243 12:49:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.243 12:49:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:21:31.243 Found net devices under 0000:98:00.0: mlx_0_0 00:21:31.243 12:49:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.243 12:49:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:31.243 12:49:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.243 12:49:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:31.243 12:49:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.243 12:49:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:21:31.243 Found net devices under 0000:98:00.1: mlx_0_1 00:21:31.243 12:49:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.243 12:49:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:31.243 12:49:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:31.243 12:49:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@408 -- # rdma_device_init 00:21:31.243 12:49:03 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:21:31.243 12:49:03 -- nvmf/common.sh@57 -- # uname 00:21:31.243 12:49:03 -- nvmf/common.sh@57 -- # '[' 
Linux '!=' Linux ']' 00:21:31.243 12:49:03 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:21:31.243 12:49:03 -- nvmf/common.sh@62 -- # modprobe ib_core 00:21:31.243 12:49:03 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:21:31.243 12:49:03 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:21:31.243 12:49:03 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:21:31.243 12:49:03 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:21:31.243 12:49:03 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:21:31.243 12:49:03 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:21:31.243 12:49:03 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:31.243 12:49:03 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:21:31.243 12:49:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:31.243 12:49:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:31.243 12:49:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:31.243 12:49:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:31.243 12:49:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:31.243 12:49:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:31.243 12:49:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.243 12:49:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:31.243 12:49:03 -- nvmf/common.sh@104 -- # continue 2 00:21:31.243 12:49:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:31.243 12:49:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.243 12:49:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.243 12:49:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:31.243 12:49:03 -- nvmf/common.sh@104 -- # continue 2 00:21:31.243 12:49:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:31.243 12:49:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:21:31.243 12:49:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:31.243 12:49:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:31.243 12:49:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:31.243 12:49:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:31.243 12:49:03 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:21:31.243 12:49:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:21:31.243 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:31.243 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:21:31.243 altname enp152s0f0np0 00:21:31.243 altname ens817f0np0 00:21:31.243 inet 192.168.100.8/24 scope global mlx_0_0 00:21:31.243 valid_lft forever preferred_lft forever 00:21:31.243 12:49:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:31.243 12:49:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:21:31.243 12:49:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:31.243 12:49:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:31.243 12:49:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:31.243 12:49:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:31.243 12:49:03 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:21:31.243 12:49:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:21:31.243 12:49:03 -- 
nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:21:31.243 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:31.243 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:21:31.243 altname enp152s0f1np1 00:21:31.243 altname ens817f1np1 00:21:31.243 inet 192.168.100.9/24 scope global mlx_0_1 00:21:31.243 valid_lft forever preferred_lft forever 00:21:31.243 12:49:03 -- nvmf/common.sh@410 -- # return 0 00:21:31.243 12:49:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:31.243 12:49:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:31.243 12:49:03 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:21:31.243 12:49:03 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:21:31.243 12:49:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:31.243 12:49:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:31.243 12:49:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:31.243 12:49:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:31.243 12:49:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:31.243 12:49:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:31.243 12:49:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.243 12:49:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:31.243 12:49:03 -- nvmf/common.sh@104 -- # continue 2 00:21:31.243 12:49:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:31.243 12:49:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.243 12:49:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.243 12:49:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:31.243 12:49:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:31.243 12:49:03 -- nvmf/common.sh@104 -- # continue 2 00:21:31.244 12:49:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:31.244 12:49:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:21:31.244 12:49:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:31.244 12:49:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:31.244 12:49:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:31.244 12:49:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:31.244 12:49:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:31.244 12:49:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:21:31.244 12:49:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:31.244 12:49:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:31.244 12:49:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:31.244 12:49:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:31.244 12:49:03 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:21:31.244 192.168.100.9' 00:21:31.244 12:49:03 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:21:31.244 192.168.100.9' 00:21:31.244 12:49:03 -- nvmf/common.sh@445 -- # head -n 1 00:21:31.244 12:49:03 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:31.244 12:49:03 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:31.244 192.168.100.9' 00:21:31.244 12:49:03 -- nvmf/common.sh@446 -- # tail -n +2 00:21:31.244 12:49:03 -- nvmf/common.sh@446 -- # head -n 1 00:21:31.244 12:49:03 -- 
nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:31.244 12:49:03 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:21:31.244 12:49:03 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:31.244 12:49:03 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:21:31.244 12:49:03 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:21:31.244 12:49:03 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:21:31.244 12:49:03 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:31.244 12:49:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:31.244 12:49:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:31.244 12:49:03 -- common/autotest_common.sh@10 -- # set +x 00:21:31.244 12:49:03 -- nvmf/common.sh@469 -- # nvmfpid=573357 00:21:31.244 12:49:03 -- nvmf/common.sh@470 -- # waitforlisten 573357 00:21:31.244 12:49:03 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:31.244 12:49:03 -- common/autotest_common.sh@829 -- # '[' -z 573357 ']' 00:21:31.244 12:49:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.244 12:49:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.244 12:49:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.244 12:49:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.244 12:49:03 -- common/autotest_common.sh@10 -- # set +x 00:21:31.244 [2024-11-20 12:49:03.322245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:31.244 [2024-11-20 12:49:03.322303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.244 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.244 [2024-11-20 12:49:03.384223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:31.244 [2024-11-20 12:49:03.449266] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:31.244 [2024-11-20 12:49:03.449396] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.244 [2024-11-20 12:49:03.449406] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.244 [2024-11-20 12:49:03.449414] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
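Up to this point the trace has only prepared the environment: the InfiniBand/RDMA kernel modules are loaded, the IPv4 addresses already configured on the two mlx5 ports are read back into NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP, and the nvmf target application is started and awaited on its RPC socket. A minimal standalone sketch of that sequence, assuming the interfaces are already named mlx_0_0 / mlx_0_1 and that SPDK_DIR is a placeholder pointing at an SPDK checkout (the log itself uses the full workspace path), looks like:

# Load the InfiniBand/RDMA stack plus the NVMe/RDMA host driver, as nvmf/common.sh does above.
for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
    modprobe "$m"
done

# Read the first IPv4 address on each mlx port, mirroring get_ip_address in the trace.
NVMF_FIRST_TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)   # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1)  # 192.168.100.9 in this run

# Start the nvmf target on a 4-core mask and wait for its RPC socket before issuing any RPCs.
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done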
00:21:31.244 [2024-11-20 12:49:03.449561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.244 [2024-11-20 12:49:03.449683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.244 [2024-11-20 12:49:03.449852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.244 [2024-11-20 12:49:03.449853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:31.244 12:49:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.244 12:49:04 -- common/autotest_common.sh@862 -- # return 0 00:21:31.244 12:49:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:31.244 12:49:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:31.244 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.244 12:49:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.244 12:49:04 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:31.244 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.244 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.244 [2024-11-20 12:49:04.175995] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14ba7f0/0x14bece0) succeed. 00:21:31.244 [2024-11-20 12:49:04.190448] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14bbde0/0x1500380) succeed. 00:21:31.244 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.244 12:49:04 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:31.244 12:49:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.244 12:49:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:31.244 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.244 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.244 Malloc1 00:21:31.244 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.244 12:49:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:31.244 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.244 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.505 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.505 12:49:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:31.505 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.505 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.505 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.505 12:49:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:31.505 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.505 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.505 [2024-11-20 12:49:04.374805] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:31.505 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.505 12:49:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.505 12:49:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:31.505 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.505 12:49:04 -- common/autotest_common.sh@10 -- # set +x 
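The block above creates the RDMA transport and runs the first pass of a seq 1 11 loop: one malloc bdev, one subsystem, one namespace, and one RDMA listener per iteration; the trace below simply repeats the same four RPCs for cnode2 through cnode11. A rough standalone equivalent of one pass, written as scripts/rpc.py calls against the default /var/tmp/spdk.sock (the rpc_cmd lines in the trace drive the same RPC methods; SPDK_DIR is again a placeholder), is:

# The transport is created once; everything inside the loop is per subsystem.
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

for i in $(seq 1 11); do
    # 64 MB malloc bdev with 512-byte blocks to back the namespace.
    "$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512 -b "Malloc$i"
    # Subsystem with serial SPDK$i, allowing any host (-a).
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    # Listen on the first RDMA IP, NVMe-oF port 4420.
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a 192.168.100.8 -s 4420
done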
00:21:31.505 Malloc2 00:21:31.505 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.505 12:49:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:31.505 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.505 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.505 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.505 12:49:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:31.505 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.505 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.505 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.505 12:49:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:21:31.505 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.505 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.505 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.505 12:49:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.505 12:49:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:31.505 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.505 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.505 Malloc3 00:21:31.505 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.505 12:49:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:31.505 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.505 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.505 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.505 12:49:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:31.505 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.505 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.505 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.505 12:49:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:21:31.505 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.505 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.505 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.505 12:49:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.505 12:49:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:31.505 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.505 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.505 Malloc4 00:21:31.505 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.505 12:49:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:31.505 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.505 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.505 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.505 12:49:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:31.505 12:49:04 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.505 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.505 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.505 12:49:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:21:31.505 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.505 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.505 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.505 12:49:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.505 12:49:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:31.505 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.506 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.506 Malloc5 00:21:31.506 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.506 12:49:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:31.506 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.506 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.506 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.506 12:49:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:31.506 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.506 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.506 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.506 12:49:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:21:31.506 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.506 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.506 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.506 12:49:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.506 12:49:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:31.506 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.506 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.766 Malloc6 00:21:31.766 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.766 12:49:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:31.766 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.766 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.766 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.766 12:49:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:31.766 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.766 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.766 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.766 12:49:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:21:31.766 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.766 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.766 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.766 12:49:04 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.766 12:49:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:31.766 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.766 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.766 Malloc7 00:21:31.766 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.766 12:49:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:31.766 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.766 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.766 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.766 12:49:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:31.766 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.767 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.767 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.767 12:49:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:21:31.767 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.767 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.767 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.767 12:49:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.767 12:49:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:31.767 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.767 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.767 Malloc8 00:21:31.767 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.767 12:49:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:31.767 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.767 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.767 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.767 12:49:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:31.767 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.767 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.767 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.767 12:49:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:21:31.767 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.767 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.767 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.767 12:49:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.767 12:49:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:31.767 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.767 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.767 Malloc9 00:21:31.767 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.767 12:49:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:31.767 12:49:04 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:31.767 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.767 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.767 12:49:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:31.767 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.767 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.767 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.767 12:49:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:21:31.767 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.767 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.767 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.767 12:49:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.767 12:49:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:31.767 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.767 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.767 Malloc10 00:21:31.767 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.767 12:49:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:31.767 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.767 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.767 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.767 12:49:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:31.767 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.767 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.767 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.767 12:49:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:21:31.767 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.767 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.767 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.767 12:49:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.767 12:49:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:31.767 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.767 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:32.027 Malloc11 00:21:32.027 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.027 12:49:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:32.027 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.027 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:32.027 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.027 12:49:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:32.027 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.027 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:32.027 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.027 12:49:04 -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:21:32.027 12:49:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.027 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:32.027 12:49:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.027 12:49:04 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:32.027 12:49:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:32.027 12:49:04 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:33.411 12:49:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:33.411 12:49:06 -- common/autotest_common.sh@1187 -- # local i=0 00:21:33.411 12:49:06 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:33.411 12:49:06 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:33.411 12:49:06 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:35.325 12:49:08 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:35.325 12:49:08 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:35.325 12:49:08 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:21:35.325 12:49:08 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:35.586 12:49:08 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:35.586 12:49:08 -- common/autotest_common.sh@1197 -- # return 0 00:21:35.586 12:49:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:35.586 12:49:08 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:21:36.970 12:49:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:36.970 12:49:09 -- common/autotest_common.sh@1187 -- # local i=0 00:21:36.970 12:49:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:36.970 12:49:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:36.970 12:49:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:38.880 12:49:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:38.880 12:49:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:38.880 12:49:11 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:21:38.880 12:49:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:38.880 12:49:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:38.880 12:49:11 -- common/autotest_common.sh@1197 -- # return 0 00:21:38.880 12:49:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:38.880 12:49:11 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:21:40.798 12:49:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:40.798 12:49:13 -- common/autotest_common.sh@1187 -- # local i=0 00:21:40.798 12:49:13 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:40.798 12:49:13 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:40.798 12:49:13 -- 
common/autotest_common.sh@1194 -- # sleep 2 00:21:42.712 12:49:15 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:42.712 12:49:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:42.712 12:49:15 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:21:42.712 12:49:15 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:42.712 12:49:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:42.712 12:49:15 -- common/autotest_common.sh@1197 -- # return 0 00:21:42.712 12:49:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:42.712 12:49:15 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:21:44.095 12:49:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:44.095 12:49:16 -- common/autotest_common.sh@1187 -- # local i=0 00:21:44.095 12:49:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:44.095 12:49:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:44.095 12:49:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:46.008 12:49:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:46.008 12:49:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:46.008 12:49:18 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:21:46.008 12:49:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:46.008 12:49:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:46.008 12:49:18 -- common/autotest_common.sh@1197 -- # return 0 00:21:46.008 12:49:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.008 12:49:18 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:21:47.396 12:49:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:47.396 12:49:20 -- common/autotest_common.sh@1187 -- # local i=0 00:21:47.396 12:49:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:47.396 12:49:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:47.396 12:49:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:49.307 12:49:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:49.307 12:49:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:49.307 12:49:22 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:21:49.307 12:49:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:49.307 12:49:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:49.307 12:49:22 -- common/autotest_common.sh@1197 -- # return 0 00:21:49.307 12:49:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.307 12:49:22 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:21:51.221 12:49:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:51.221 12:49:23 -- common/autotest_common.sh@1187 -- # local i=0 00:21:51.221 12:49:23 -- 
common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:51.221 12:49:23 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:51.221 12:49:23 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:53.134 12:49:25 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:53.134 12:49:25 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:53.134 12:49:25 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:21:53.134 12:49:25 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:53.134 12:49:25 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:53.134 12:49:25 -- common/autotest_common.sh@1197 -- # return 0 00:21:53.134 12:49:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:53.134 12:49:25 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:21:54.521 12:49:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:54.521 12:49:27 -- common/autotest_common.sh@1187 -- # local i=0 00:21:54.521 12:49:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:54.521 12:49:27 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:54.521 12:49:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:56.437 12:49:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:56.437 12:49:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:56.437 12:49:29 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:21:56.437 12:49:29 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:56.437 12:49:29 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:56.437 12:49:29 -- common/autotest_common.sh@1197 -- # return 0 00:21:56.437 12:49:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:56.437 12:49:29 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:21:57.822 12:49:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:57.822 12:49:30 -- common/autotest_common.sh@1187 -- # local i=0 00:21:57.822 12:49:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:57.822 12:49:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:57.822 12:49:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:59.736 12:49:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:59.736 12:49:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:59.736 12:49:32 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:21:59.736 12:49:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:59.736 12:49:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:59.736 12:49:32 -- common/autotest_common.sh@1197 -- # return 0 00:21:59.736 12:49:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.736 12:49:32 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:22:01.652 
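Each of these connects is followed by the same waitforserial poll (the repeated lsblk/grep lines in the trace), and the pattern continues through cnode11 below. A condensed sketch of the host-side loop, with the hostnqn/hostid values taken verbatim from the log and the retry behavior matching the i++ <= 15 / sleep 2 polling shown above, is:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6

for i in $(seq 1 11); do
    # Connect to subsystem $i over RDMA (flags as in the trace).
    nvme connect -i 15 --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t rdma -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420

    # waitforserial: poll until a block device with serial SPDK$i appears (up to ~15 tries, 2 s apart).
    tries=0
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
        tries=$((tries + 1))
        if [ "$tries" -gt 15 ]; then echo "SPDK$i did not appear" >&2; exit 1; fi
        sleep 2
    done
done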
12:49:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:01.652 12:49:34 -- common/autotest_common.sh@1187 -- # local i=0 00:22:01.652 12:49:34 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:01.652 12:49:34 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:01.652 12:49:34 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:03.566 12:49:36 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:03.566 12:49:36 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:03.566 12:49:36 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:22:03.566 12:49:36 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:03.566 12:49:36 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:03.566 12:49:36 -- common/autotest_common.sh@1197 -- # return 0 00:22:03.566 12:49:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.566 12:49:36 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:22:04.952 12:49:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:04.952 12:49:37 -- common/autotest_common.sh@1187 -- # local i=0 00:22:04.952 12:49:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:04.952 12:49:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:04.952 12:49:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:06.865 12:49:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:06.865 12:49:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:06.865 12:49:39 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:22:06.865 12:49:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:06.865 12:49:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:06.865 12:49:39 -- common/autotest_common.sh@1197 -- # return 0 00:22:06.866 12:49:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.866 12:49:39 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:22:08.253 12:49:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:08.253 12:49:41 -- common/autotest_common.sh@1187 -- # local i=0 00:22:08.253 12:49:41 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:08.253 12:49:41 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:08.253 12:49:41 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:10.798 12:49:43 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:10.798 12:49:43 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:10.798 12:49:43 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:22:10.798 12:49:43 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:10.798 12:49:43 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:10.798 12:49:43 -- common/autotest_common.sh@1197 -- # return 0 00:22:10.798 12:49:43 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:10.798 [global] 00:22:10.798 
thread=1 00:22:10.798 invalidate=1 00:22:10.798 rw=read 00:22:10.798 time_based=1 00:22:10.798 runtime=10 00:22:10.798 ioengine=libaio 00:22:10.798 direct=1 00:22:10.798 bs=262144 00:22:10.798 iodepth=64 00:22:10.798 norandommap=1 00:22:10.798 numjobs=1 00:22:10.798 00:22:10.798 [job0] 00:22:10.798 filename=/dev/nvme0n1 00:22:10.798 [job1] 00:22:10.798 filename=/dev/nvme10n1 00:22:10.798 [job2] 00:22:10.798 filename=/dev/nvme1n1 00:22:10.798 [job3] 00:22:10.798 filename=/dev/nvme2n1 00:22:10.798 [job4] 00:22:10.798 filename=/dev/nvme3n1 00:22:10.798 [job5] 00:22:10.798 filename=/dev/nvme4n1 00:22:10.798 [job6] 00:22:10.798 filename=/dev/nvme5n1 00:22:10.798 [job7] 00:22:10.798 filename=/dev/nvme6n1 00:22:10.798 [job8] 00:22:10.798 filename=/dev/nvme7n1 00:22:10.798 [job9] 00:22:10.798 filename=/dev/nvme8n1 00:22:10.798 [job10] 00:22:10.798 filename=/dev/nvme9n1 00:22:10.798 Could not set queue depth (nvme0n1) 00:22:10.798 Could not set queue depth (nvme10n1) 00:22:10.798 Could not set queue depth (nvme1n1) 00:22:10.798 Could not set queue depth (nvme2n1) 00:22:10.798 Could not set queue depth (nvme3n1) 00:22:10.798 Could not set queue depth (nvme4n1) 00:22:10.798 Could not set queue depth (nvme5n1) 00:22:10.798 Could not set queue depth (nvme6n1) 00:22:10.798 Could not set queue depth (nvme7n1) 00:22:10.798 Could not set queue depth (nvme8n1) 00:22:10.798 Could not set queue depth (nvme9n1) 00:22:11.059 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:11.060 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:11.060 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:11.060 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:11.060 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:11.060 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:11.060 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:11.060 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:11.060 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:11.060 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:11.060 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:11.060 fio-3.35 00:22:11.060 Starting 11 threads 00:22:23.307 00:22:23.307 job0: (groupid=0, jobs=1): err= 0: pid=581850: Wed Nov 20 12:49:54 2024 00:22:23.307 read: IOPS=1278, BW=320MiB/s (335MB/s)(3209MiB/10043msec) 00:22:23.307 slat (usec): min=6, max=18964, avg=775.96, stdev=2109.95 00:22:23.307 clat (usec): min=10051, max=93024, avg=49245.63, stdev=4994.81 00:22:23.307 lat (msec): min=10, max=104, avg=50.02, stdev= 5.39 00:22:23.307 clat percentiles (usec): 00:22:23.307 | 1.00th=[43254], 5.00th=[44303], 10.00th=[44827], 20.00th=[45351], 00:22:23.307 | 30.00th=[45876], 40.00th=[46400], 50.00th=[49546], 60.00th=[51119], 00:22:23.307 | 70.00th=[52167], 80.00th=[52691], 90.00th=[54264], 95.00th=[55837], 00:22:23.307 | 99.00th=[61080], 99.50th=[65274], 99.90th=[86508], 
99.95th=[92799], 00:22:23.307 | 99.99th=[92799] 00:22:23.307 bw ( KiB/s): min=298922, max=357888, per=7.49%, avg=326898.55, stdev=23482.33, samples=20 00:22:23.307 iops : min= 1167, max= 1398, avg=1276.90, stdev=91.76, samples=20 00:22:23.307 lat (msec) : 20=0.31%, 50=51.06%, 100=48.63% 00:22:23.307 cpu : usr=0.39%, sys=3.87%, ctx=2721, majf=0, minf=4097 00:22:23.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:23.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.307 issued rwts: total=12835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.307 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.307 job1: (groupid=0, jobs=1): err= 0: pid=581856: Wed Nov 20 12:49:54 2024 00:22:23.307 read: IOPS=2076, BW=519MiB/s (544MB/s)(5208MiB/10031msec) 00:22:23.307 slat (usec): min=6, max=17033, avg=477.91, stdev=1326.98 00:22:23.307 clat (usec): min=8000, max=54807, avg=30302.72, stdev=4521.28 00:22:23.307 lat (usec): min=8221, max=54818, avg=30780.64, stdev=4711.55 00:22:23.307 clat percentiles (usec): 00:22:23.307 | 1.00th=[24511], 5.00th=[25035], 10.00th=[25560], 20.00th=[26346], 00:22:23.307 | 30.00th=[26870], 40.00th=[27132], 50.00th=[28181], 60.00th=[33162], 00:22:23.307 | 70.00th=[33817], 80.00th=[34866], 90.00th=[35914], 95.00th=[36439], 00:22:23.307 | 99.00th=[39584], 99.50th=[44827], 99.90th=[50070], 99.95th=[52167], 00:22:23.307 | 99.99th=[52691] 00:22:23.307 bw ( KiB/s): min=450048, max=608768, per=12.17%, avg=531635.20, stdev=71889.36, samples=20 00:22:23.307 iops : min= 1758, max= 2378, avg=2076.70, stdev=280.82, samples=20 00:22:23.307 lat (msec) : 10=0.09%, 20=0.24%, 50=99.58%, 100=0.09% 00:22:23.307 cpu : usr=0.48%, sys=4.87%, ctx=4288, majf=0, minf=3535 00:22:23.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:23.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.307 issued rwts: total=20830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.307 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.307 job2: (groupid=0, jobs=1): err= 0: pid=581859: Wed Nov 20 12:49:54 2024 00:22:23.307 read: IOPS=2001, BW=500MiB/s (525MB/s)(5023MiB/10036msec) 00:22:23.307 slat (usec): min=6, max=11981, avg=494.05, stdev=1284.35 00:22:23.307 clat (usec): min=1586, max=70653, avg=31431.06, stdev=7147.79 00:22:23.307 lat (usec): min=1639, max=70667, avg=31925.11, stdev=7328.70 00:22:23.307 clat percentiles (usec): 00:22:23.307 | 1.00th=[12911], 5.00th=[16450], 10.00th=[17171], 20.00th=[32113], 00:22:23.307 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[34341], 00:22:23.307 | 70.00th=[34866], 80.00th=[35390], 90.00th=[36439], 95.00th=[37487], 00:22:23.307 | 99.00th=[40633], 99.50th=[43254], 99.90th=[57410], 99.95th=[62129], 00:22:23.307 | 99.99th=[64750] 00:22:23.307 bw ( KiB/s): min=453120, max=952320, per=11.74%, avg=512716.80, stdev=128525.52, samples=20 00:22:23.307 iops : min= 1770, max= 3720, avg=2002.80, stdev=502.05, samples=20 00:22:23.307 lat (msec) : 2=0.06%, 4=0.23%, 10=0.39%, 20=16.23%, 50=82.93% 00:22:23.307 lat (msec) : 100=0.15% 00:22:23.307 cpu : usr=0.42%, sys=4.68%, ctx=4418, majf=0, minf=4097 00:22:23.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:23.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:22:23.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.307 issued rwts: total=20091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.307 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.307 job3: (groupid=0, jobs=1): err= 0: pid=581860: Wed Nov 20 12:49:54 2024 00:22:23.307 read: IOPS=1418, BW=355MiB/s (372MB/s)(3560MiB/10037msec) 00:22:23.307 slat (usec): min=6, max=22268, avg=699.49, stdev=2146.71 00:22:23.307 clat (usec): min=11712, max=78092, avg=44339.56, stdev=5274.53 00:22:23.307 lat (usec): min=12052, max=78102, avg=45039.05, stdev=5657.11 00:22:23.307 clat percentiles (usec): 00:22:23.307 | 1.00th=[37487], 5.00th=[38536], 10.00th=[39060], 20.00th=[39584], 00:22:23.307 | 30.00th=[40633], 40.00th=[42206], 50.00th=[44827], 60.00th=[45876], 00:22:23.307 | 70.00th=[46400], 80.00th=[47449], 90.00th=[50594], 95.00th=[54264], 00:22:23.307 | 99.00th=[58459], 99.50th=[64226], 99.90th=[72877], 99.95th=[72877], 00:22:23.307 | 99.99th=[78119] 00:22:23.307 bw ( KiB/s): min=294912, max=409088, per=8.31%, avg=362913.80, stdev=34108.99, samples=20 00:22:23.307 iops : min= 1152, max= 1598, avg=1417.60, stdev=133.26, samples=20 00:22:23.307 lat (msec) : 20=0.22%, 50=89.11%, 100=10.67% 00:22:23.307 cpu : usr=0.41%, sys=4.65%, ctx=2668, majf=0, minf=4097 00:22:23.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:23.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.307 issued rwts: total=14238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.307 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.308 job4: (groupid=0, jobs=1): err= 0: pid=581861: Wed Nov 20 12:49:54 2024 00:22:23.308 read: IOPS=1280, BW=320MiB/s (336MB/s)(3215MiB/10046msec) 00:22:23.308 slat (usec): min=6, max=19714, avg=775.55, stdev=2136.04 00:22:23.308 clat (usec): min=9630, max=89670, avg=49161.72, stdev=4997.27 00:22:23.308 lat (usec): min=9836, max=89695, avg=49937.27, stdev=5386.62 00:22:23.308 clat percentiles (usec): 00:22:23.308 | 1.00th=[43254], 5.00th=[44303], 10.00th=[44303], 20.00th=[45351], 00:22:23.308 | 30.00th=[45876], 40.00th=[46400], 50.00th=[49546], 60.00th=[51119], 00:22:23.308 | 70.00th=[52167], 80.00th=[52691], 90.00th=[53740], 95.00th=[55837], 00:22:23.308 | 99.00th=[62129], 99.50th=[65274], 99.90th=[83362], 99.95th=[89654], 00:22:23.308 | 99.99th=[89654] 00:22:23.308 bw ( KiB/s): min=297984, max=355328, per=7.50%, avg=327612.50, stdev=23283.13, samples=20 00:22:23.308 iops : min= 1164, max= 1388, avg=1279.70, stdev=90.91, samples=20 00:22:23.308 lat (msec) : 10=0.02%, 20=0.30%, 50=51.32%, 100=48.36% 00:22:23.308 cpu : usr=0.33%, sys=3.37%, ctx=2703, majf=0, minf=4097 00:22:23.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:23.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.308 issued rwts: total=12859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.308 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.308 job5: (groupid=0, jobs=1): err= 0: pid=581862: Wed Nov 20 12:49:54 2024 00:22:23.308 read: IOPS=1422, BW=356MiB/s (373MB/s)(3567MiB/10033msec) 00:22:23.308 slat (usec): min=6, max=24121, avg=698.24, stdev=2136.46 00:22:23.308 clat (usec): min=9827, max=76848, avg=44256.72, stdev=5328.06 00:22:23.308 
lat (usec): min=10146, max=76859, avg=44954.96, stdev=5725.01 00:22:23.308 clat percentiles (usec): 00:22:23.308 | 1.00th=[37487], 5.00th=[38536], 10.00th=[39060], 20.00th=[39584], 00:22:23.308 | 30.00th=[40633], 40.00th=[41681], 50.00th=[44827], 60.00th=[45876], 00:22:23.308 | 70.00th=[46400], 80.00th=[47449], 90.00th=[50594], 95.00th=[53740], 00:22:23.308 | 99.00th=[58459], 99.50th=[63177], 99.90th=[70779], 99.95th=[71828], 00:22:23.308 | 99.99th=[77071] 00:22:23.308 bw ( KiB/s): min=300544, max=408064, per=8.33%, avg=363639.85, stdev=33249.20, samples=20 00:22:23.308 iops : min= 1174, max= 1594, avg=1420.45, stdev=129.89, samples=20 00:22:23.308 lat (msec) : 10=0.01%, 20=0.34%, 50=89.19%, 100=10.46% 00:22:23.308 cpu : usr=0.37%, sys=3.90%, ctx=2988, majf=0, minf=4097 00:22:23.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:23.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.308 issued rwts: total=14269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.308 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.308 job6: (groupid=0, jobs=1): err= 0: pid=581863: Wed Nov 20 12:49:54 2024 00:22:23.308 read: IOPS=1545, BW=386MiB/s (405MB/s)(3877MiB/10031msec) 00:22:23.308 slat (usec): min=6, max=17879, avg=639.81, stdev=1867.24 00:22:23.308 clat (usec): min=10249, max=70307, avg=40724.05, stdev=7719.13 00:22:23.308 lat (usec): min=10461, max=71401, avg=41363.85, stdev=8000.39 00:22:23.308 clat percentiles (usec): 00:22:23.308 | 1.00th=[31065], 5.00th=[32375], 10.00th=[32637], 20.00th=[33424], 00:22:23.308 | 30.00th=[33817], 40.00th=[34866], 50.00th=[39584], 60.00th=[45351], 00:22:23.308 | 70.00th=[46400], 80.00th=[46924], 90.00th=[49546], 95.00th=[53740], 00:22:23.308 | 99.00th=[57934], 99.50th=[60556], 99.90th=[64226], 99.95th=[66847], 00:22:23.308 | 99.99th=[70779] 00:22:23.308 bw ( KiB/s): min=305152, max=479744, per=9.05%, avg=395342.35, stdev=67245.53, samples=20 00:22:23.308 iops : min= 1192, max= 1874, avg=1544.25, stdev=262.72, samples=20 00:22:23.308 lat (msec) : 20=0.34%, 50=89.95%, 100=9.71% 00:22:23.308 cpu : usr=0.37%, sys=3.52%, ctx=3411, majf=0, minf=4097 00:22:23.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:23.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.308 issued rwts: total=15506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.308 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.308 job7: (groupid=0, jobs=1): err= 0: pid=581864: Wed Nov 20 12:49:54 2024 00:22:23.308 read: IOPS=1419, BW=355MiB/s (372MB/s)(3561MiB/10037msec) 00:22:23.308 slat (usec): min=6, max=20052, avg=699.61, stdev=2100.86 00:22:23.308 clat (usec): min=13193, max=74273, avg=44320.50, stdev=5267.72 00:22:23.308 lat (usec): min=13702, max=76209, avg=45020.11, stdev=5637.79 00:22:23.308 clat percentiles (usec): 00:22:23.308 | 1.00th=[37487], 5.00th=[38536], 10.00th=[38536], 20.00th=[39584], 00:22:23.308 | 30.00th=[40109], 40.00th=[41681], 50.00th=[44827], 60.00th=[45876], 00:22:23.308 | 70.00th=[46400], 80.00th=[47449], 90.00th=[51119], 95.00th=[54264], 00:22:23.308 | 99.00th=[58459], 99.50th=[62653], 99.90th=[72877], 99.95th=[73925], 00:22:23.308 | 99.99th=[73925] 00:22:23.308 bw ( KiB/s): min=300544, max=410624, per=8.31%, avg=363059.20, stdev=34329.89, samples=20 
00:22:23.308 iops : min= 1174, max= 1604, avg=1418.20, stdev=134.10, samples=20 00:22:23.308 lat (msec) : 20=0.15%, 50=88.60%, 100=11.25% 00:22:23.308 cpu : usr=0.27%, sys=3.52%, ctx=2979, majf=0, minf=4097 00:22:23.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:23.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.308 issued rwts: total=14245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.308 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.308 job8: (groupid=0, jobs=1): err= 0: pid=581865: Wed Nov 20 12:49:54 2024 00:22:23.308 read: IOPS=1278, BW=320MiB/s (335MB/s)(3210MiB/10042msec) 00:22:23.308 slat (usec): min=6, max=17793, avg=775.65, stdev=2093.30 00:22:23.308 clat (usec): min=10999, max=91498, avg=49219.10, stdev=4890.75 00:22:23.308 lat (usec): min=11335, max=91515, avg=49994.75, stdev=5271.88 00:22:23.308 clat percentiles (usec): 00:22:23.308 | 1.00th=[43254], 5.00th=[44303], 10.00th=[44827], 20.00th=[45351], 00:22:23.308 | 30.00th=[45876], 40.00th=[46400], 50.00th=[49546], 60.00th=[51119], 00:22:23.308 | 70.00th=[52167], 80.00th=[52691], 90.00th=[53740], 95.00th=[55837], 00:22:23.308 | 99.00th=[62653], 99.50th=[66847], 99.90th=[86508], 99.95th=[87557], 00:22:23.308 | 99.99th=[91751] 00:22:23.308 bw ( KiB/s): min=299008, max=354304, per=7.49%, avg=327095.40, stdev=23720.61, samples=20 00:22:23.308 iops : min= 1168, max= 1384, avg=1277.65, stdev=92.67, samples=20 00:22:23.308 lat (msec) : 20=0.23%, 50=51.28%, 100=48.50% 00:22:23.308 cpu : usr=0.52%, sys=3.79%, ctx=2601, majf=0, minf=4097 00:22:23.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:23.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.308 issued rwts: total=12840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.308 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.308 job9: (groupid=0, jobs=1): err= 0: pid=581866: Wed Nov 20 12:49:54 2024 00:22:23.308 read: IOPS=2077, BW=519MiB/s (544MB/s)(5205MiB/10024msec) 00:22:23.308 slat (usec): min=6, max=15762, avg=478.51, stdev=1295.97 00:22:23.308 clat (usec): min=10105, max=50882, avg=30309.44, stdev=4451.15 00:22:23.308 lat (usec): min=10317, max=51331, avg=30787.95, stdev=4628.15 00:22:23.308 clat percentiles (usec): 00:22:23.308 | 1.00th=[24511], 5.00th=[25035], 10.00th=[25560], 20.00th=[26084], 00:22:23.308 | 30.00th=[26870], 40.00th=[27395], 50.00th=[28181], 60.00th=[33162], 00:22:23.308 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35914], 95.00th=[36439], 00:22:23.308 | 99.00th=[40109], 99.50th=[43779], 99.90th=[47973], 99.95th=[50070], 00:22:23.308 | 99.99th=[50594] 00:22:23.308 bw ( KiB/s): min=452608, max=611328, per=12.16%, avg=531213.05, stdev=70922.80, samples=20 00:22:23.308 iops : min= 1768, max= 2388, avg=2075.00, stdev=277.03, samples=20 00:22:23.308 lat (msec) : 20=0.20%, 50=99.76%, 100=0.03% 00:22:23.308 cpu : usr=0.36%, sys=4.11%, ctx=4422, majf=0, minf=4097 00:22:23.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:23.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.308 issued rwts: total=20820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.308 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:22:23.308 job10: (groupid=0, jobs=1): err= 0: pid=581867: Wed Nov 20 12:49:54 2024 00:22:23.308 read: IOPS=1278, BW=320MiB/s (335MB/s)(3210MiB/10045msec) 00:22:23.308 slat (usec): min=6, max=19656, avg=775.71, stdev=2056.77 00:22:23.308 clat (usec): min=10465, max=95102, avg=49221.63, stdev=4921.84 00:22:23.308 lat (usec): min=10696, max=95116, avg=49997.34, stdev=5291.14 00:22:23.308 clat percentiles (usec): 00:22:23.308 | 1.00th=[43254], 5.00th=[44303], 10.00th=[44827], 20.00th=[45351], 00:22:23.308 | 30.00th=[45876], 40.00th=[46924], 50.00th=[49546], 60.00th=[51119], 00:22:23.308 | 70.00th=[52167], 80.00th=[52691], 90.00th=[53740], 95.00th=[55313], 00:22:23.308 | 99.00th=[61080], 99.50th=[65799], 99.90th=[80217], 99.95th=[93848], 00:22:23.308 | 99.99th=[94897] 00:22:23.308 bw ( KiB/s): min=299520, max=352256, per=7.49%, avg=327091.20, stdev=22675.29, samples=20 00:22:23.308 iops : min= 1170, max= 1376, avg=1277.70, stdev=88.58, samples=20 00:22:23.308 lat (msec) : 20=0.28%, 50=50.91%, 100=48.81% 00:22:23.308 cpu : usr=0.51%, sys=3.85%, ctx=2657, majf=0, minf=4097 00:22:23.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:23.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.308 issued rwts: total=12840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.308 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.308 00:22:23.308 Run status group 0 (all jobs): 00:22:23.308 READ: bw=4265MiB/s (4472MB/s), 320MiB/s-519MiB/s (335MB/s-544MB/s), io=41.8GiB (44.9GB), run=10024-10046msec 00:22:23.308 00:22:23.308 Disk stats (read/write): 00:22:23.308 nvme0n1: ios=25443/0, merge=0/0, ticks=1229481/0, in_queue=1229481, util=97.33% 00:22:23.308 nvme10n1: ios=41310/0, merge=0/0, ticks=1227271/0, in_queue=1227271, util=97.56% 00:22:23.308 nvme1n1: ios=39899/0, merge=0/0, ticks=1225547/0, in_queue=1225547, util=97.83% 00:22:23.308 nvme2n1: ios=28211/0, merge=0/0, ticks=1225232/0, in_queue=1225232, util=97.93% 00:22:23.308 nvme3n1: ios=25489/0, merge=0/0, ticks=1229374/0, in_queue=1229374, util=97.99% 00:22:23.309 nvme4n1: ios=28256/0, merge=0/0, ticks=1228361/0, in_queue=1228361, util=98.27% 00:22:23.309 nvme5n1: ios=30696/0, merge=0/0, ticks=1225548/0, in_queue=1225548, util=98.41% 00:22:23.309 nvme6n1: ios=28237/0, merge=0/0, ticks=1227222/0, in_queue=1227222, util=98.56% 00:22:23.309 nvme7n1: ios=25435/0, merge=0/0, ticks=1228225/0, in_queue=1228225, util=98.87% 00:22:23.309 nvme8n1: ios=41282/0, merge=0/0, ticks=1225854/0, in_queue=1225854, util=99.02% 00:22:23.309 nvme9n1: ios=25449/0, merge=0/0, ticks=1229111/0, in_queue=1229111, util=99.21% 00:22:23.309 12:49:54 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:23.309 [global] 00:22:23.309 thread=1 00:22:23.309 invalidate=1 00:22:23.309 rw=randwrite 00:22:23.309 time_based=1 00:22:23.309 runtime=10 00:22:23.309 ioengine=libaio 00:22:23.309 direct=1 00:22:23.309 bs=262144 00:22:23.309 iodepth=64 00:22:23.309 norandommap=1 00:22:23.309 numjobs=1 00:22:23.309 00:22:23.309 [job0] 00:22:23.309 filename=/dev/nvme0n1 00:22:23.309 [job1] 00:22:23.309 filename=/dev/nvme10n1 00:22:23.309 [job2] 00:22:23.309 filename=/dev/nvme1n1 00:22:23.309 [job3] 00:22:23.309 filename=/dev/nvme2n1 00:22:23.309 [job4] 00:22:23.309 filename=/dev/nvme3n1 00:22:23.309 [job5] 
00:22:23.309 filename=/dev/nvme4n1 00:22:23.309 [job6] 00:22:23.309 filename=/dev/nvme5n1 00:22:23.309 [job7] 00:22:23.309 filename=/dev/nvme6n1 00:22:23.309 [job8] 00:22:23.309 filename=/dev/nvme7n1 00:22:23.309 [job9] 00:22:23.309 filename=/dev/nvme8n1 00:22:23.309 [job10] 00:22:23.309 filename=/dev/nvme9n1 00:22:23.309 Could not set queue depth (nvme0n1) 00:22:23.309 Could not set queue depth (nvme10n1) 00:22:23.309 Could not set queue depth (nvme1n1) 00:22:23.309 Could not set queue depth (nvme2n1) 00:22:23.309 Could not set queue depth (nvme3n1) 00:22:23.309 Could not set queue depth (nvme4n1) 00:22:23.309 Could not set queue depth (nvme5n1) 00:22:23.309 Could not set queue depth (nvme6n1) 00:22:23.309 Could not set queue depth (nvme7n1) 00:22:23.309 Could not set queue depth (nvme8n1) 00:22:23.309 Could not set queue depth (nvme9n1) 00:22:23.309 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.309 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.309 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.309 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.309 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.309 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.309 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.309 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.309 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.309 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.309 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.309 fio-3.35 00:22:23.309 Starting 11 threads 00:22:33.316 00:22:33.316 job0: (groupid=0, jobs=1): err= 0: pid=583889: Wed Nov 20 12:50:05 2024 00:22:33.316 write: IOPS=973, BW=243MiB/s (255MB/s)(2440MiB/10029msec); 0 zone resets 00:22:33.316 slat (usec): min=15, max=31150, avg=1006.78, stdev=2122.06 00:22:33.316 clat (msec): min=13, max=112, avg=64.73, stdev=14.19 00:22:33.316 lat (msec): min=14, max=112, avg=65.74, stdev=14.48 00:22:33.316 clat percentiles (msec): 00:22:33.316 | 1.00th=[ 24], 5.00th=[ 28], 10.00th=[ 55], 20.00th=[ 59], 00:22:33.316 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 68], 60.00th=[ 70], 00:22:33.316 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 78], 95.00th=[ 85], 00:22:33.316 | 99.00th=[ 90], 99.50th=[ 93], 99.90th=[ 103], 99.95th=[ 107], 00:22:33.316 | 99.99th=[ 113] 00:22:33.316 bw ( KiB/s): min=190976, max=395264, per=6.16%, avg=248268.80, stdev=44324.29, samples=20 00:22:33.316 iops : min= 746, max= 1544, avg=969.80, stdev=173.14, samples=20 00:22:33.316 lat (msec) : 20=0.52%, 50=8.39%, 100=90.92%, 250=0.16% 00:22:33.316 cpu : usr=2.29%, sys=3.01%, ctx=2437, majf=0, minf=16 00:22:33.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:33.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:22:33.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.316 issued rwts: total=0,9761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.316 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.316 job1: (groupid=0, jobs=1): err= 0: pid=583918: Wed Nov 20 12:50:05 2024 00:22:33.316 write: IOPS=1007, BW=252MiB/s (264MB/s)(2527MiB/10029msec); 0 zone resets 00:22:33.316 slat (usec): min=19, max=24436, avg=970.30, stdev=1775.72 00:22:33.316 clat (msec): min=14, max=106, avg=62.51, stdev= 8.90 00:22:33.316 lat (msec): min=14, max=106, avg=63.48, stdev= 8.96 00:22:33.316 clat percentiles (msec): 00:22:33.316 | 1.00th=[ 37], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 58], 00:22:33.316 | 30.00th=[ 59], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 62], 00:22:33.316 | 70.00th=[ 64], 80.00th=[ 68], 90.00th=[ 72], 95.00th=[ 85], 00:22:33.317 | 99.00th=[ 90], 99.50th=[ 92], 99.90th=[ 103], 99.95th=[ 105], 00:22:33.317 | 99.99th=[ 107] 00:22:33.317 bw ( KiB/s): min=187392, max=279552, per=6.38%, avg=257152.00, stdev=25990.92, samples=20 00:22:33.317 iops : min= 732, max= 1092, avg=1004.50, stdev=101.53, samples=20 00:22:33.317 lat (msec) : 20=0.21%, 50=1.28%, 100=98.31%, 250=0.21% 00:22:33.317 cpu : usr=2.40%, sys=3.52%, ctx=2579, majf=0, minf=19 00:22:33.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:33.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.317 issued rwts: total=0,10108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.317 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.317 job2: (groupid=0, jobs=1): err= 0: pid=583932: Wed Nov 20 12:50:05 2024 00:22:33.317 write: IOPS=2440, BW=610MiB/s (640MB/s)(6109MiB/10013msec); 0 zone resets 00:22:33.317 slat (usec): min=11, max=22014, avg=401.18, stdev=887.80 00:22:33.317 clat (usec): min=2370, max=88859, avg=25818.29, stdev=14902.75 00:22:33.317 lat (usec): min=2402, max=88900, avg=26219.47, stdev=15121.87 00:22:33.317 clat percentiles (usec): 00:22:33.317 | 1.00th=[17433], 5.00th=[17957], 10.00th=[18220], 20.00th=[18744], 00:22:33.317 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[19792], 00:22:33.317 | 70.00th=[20317], 80.00th=[23987], 90.00th=[41157], 95.00th=[67634], 00:22:33.317 | 99.00th=[72877], 99.50th=[74974], 99.90th=[80217], 99.95th=[82314], 00:22:33.317 | 99.99th=[88605] 00:22:33.317 bw ( KiB/s): min=229888, max=842240, per=15.47%, avg=623923.20, stdev=267475.44, samples=20 00:22:33.317 iops : min= 898, max= 3290, avg=2437.20, stdev=1044.83, samples=20 00:22:33.317 lat (msec) : 4=0.04%, 10=0.03%, 20=64.35%, 50=26.15%, 100=9.43% 00:22:33.317 cpu : usr=3.75%, sys=4.10%, ctx=5291, majf=0, minf=400 00:22:33.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:22:33.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.317 issued rwts: total=0,24435,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.317 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.317 job3: (groupid=0, jobs=1): err= 0: pid=583940: Wed Nov 20 12:50:05 2024 00:22:33.317 write: IOPS=1631, BW=408MiB/s (428MB/s)(4097MiB/10045msec); 0 zone resets 00:22:33.317 slat (usec): min=8, max=22742, avg=602.52, stdev=1522.35 00:22:33.317 clat (msec): min=6, max=137, avg=38.61, stdev=25.01 00:22:33.317 lat 
(msec): min=6, max=150, avg=39.22, stdev=25.40 00:22:33.317 clat percentiles (msec): 00:22:33.317 | 1.00th=[ 16], 5.00th=[ 17], 10.00th=[ 17], 20.00th=[ 18], 00:22:33.317 | 30.00th=[ 19], 40.00th=[ 20], 50.00th=[ 21], 60.00th=[ 39], 00:22:33.317 | 70.00th=[ 61], 80.00th=[ 71], 90.00th=[ 74], 95.00th=[ 80], 00:22:33.317 | 99.00th=[ 89], 99.50th=[ 92], 99.90th=[ 116], 99.95th=[ 127], 00:22:33.317 | 99.99th=[ 138] 00:22:33.317 bw ( KiB/s): min=179200, max=925184, per=10.36%, avg=418027.55, stdev=286513.46, samples=20 00:22:33.317 iops : min= 700, max= 3614, avg=1632.90, stdev=1119.16, samples=20 00:22:33.317 lat (msec) : 10=0.10%, 20=48.14%, 50=20.39%, 100=31.18%, 250=0.20% 00:22:33.317 cpu : usr=2.53%, sys=3.48%, ctx=3590, majf=0, minf=71 00:22:33.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:33.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.317 issued rwts: total=0,16389,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.317 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.317 job4: (groupid=0, jobs=1): err= 0: pid=583944: Wed Nov 20 12:50:05 2024 00:22:33.317 write: IOPS=1068, BW=267MiB/s (280MB/s)(2681MiB/10034msec); 0 zone resets 00:22:33.317 slat (usec): min=21, max=26242, avg=927.90, stdev=1611.07 00:22:33.317 clat (usec): min=28696, max=92084, avg=58950.90, stdev=7353.29 00:22:33.317 lat (usec): min=28757, max=92134, avg=59878.80, stdev=7354.02 00:22:33.317 clat percentiles (usec): 00:22:33.317 | 1.00th=[41157], 5.00th=[43779], 10.00th=[45351], 20.00th=[55837], 00:22:33.317 | 30.00th=[57410], 40.00th=[58983], 50.00th=[59507], 60.00th=[60031], 00:22:33.317 | 70.00th=[61080], 80.00th=[63701], 90.00th=[67634], 95.00th=[69731], 00:22:33.317 | 99.00th=[74974], 99.50th=[76022], 99.90th=[79168], 99.95th=[81265], 00:22:33.317 | 99.99th=[91751] 00:22:33.317 bw ( KiB/s): min=213504, max=361984, per=6.77%, avg=272870.40, stdev=34855.40, samples=20 00:22:33.317 iops : min= 834, max= 1414, avg=1065.90, stdev=136.15, samples=20 00:22:33.317 lat (msec) : 50=13.06%, 100=86.94% 00:22:33.317 cpu : usr=2.61%, sys=3.66%, ctx=2667, majf=0, minf=73 00:22:33.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:33.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.317 issued rwts: total=0,10722,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.317 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.317 job5: (groupid=0, jobs=1): err= 0: pid=583960: Wed Nov 20 12:50:05 2024 00:22:33.317 write: IOPS=3738, BW=935MiB/s (980MB/s)(9374MiB/10029msec); 0 zone resets 00:22:33.317 slat (usec): min=9, max=41737, avg=263.35, stdev=675.71 00:22:33.317 clat (usec): min=731, max=104555, avg=16850.95, stdev=11638.51 00:22:33.317 lat (usec): min=802, max=104594, avg=17114.30, stdev=11822.89 00:22:33.317 clat percentiles (usec): 00:22:33.317 | 1.00th=[12256], 5.00th=[12911], 10.00th=[13042], 20.00th=[13435], 00:22:33.317 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091], 00:22:33.317 | 70.00th=[14222], 80.00th=[14484], 90.00th=[15270], 95.00th=[61080], 00:22:33.317 | 99.00th=[67634], 99.50th=[68682], 99.90th=[72877], 99.95th=[77071], 00:22:33.317 | 99.99th=[80217] 00:22:33.317 bw ( KiB/s): min=243712, max=1177600, per=23.76%, avg=958284.80, stdev=340300.31, samples=20 00:22:33.317 
iops : min= 952, max= 4600, avg=3743.30, stdev=1329.30, samples=20 00:22:33.317 lat (usec) : 750=0.01%, 1000=0.02% 00:22:33.317 lat (msec) : 2=0.07%, 4=0.14%, 10=0.53%, 20=91.57%, 50=2.35% 00:22:33.317 lat (msec) : 100=5.30%, 250=0.01% 00:22:33.317 cpu : usr=4.62%, sys=5.88%, ctx=8761, majf=0, minf=395 00:22:33.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:33.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.317 issued rwts: total=0,37496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.317 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.317 job6: (groupid=0, jobs=1): err= 0: pid=583968: Wed Nov 20 12:50:05 2024 00:22:33.317 write: IOPS=926, BW=232MiB/s (243MB/s)(2328MiB/10049msec); 0 zone resets 00:22:33.317 slat (usec): min=19, max=16547, avg=1069.23, stdev=2128.91 00:22:33.317 clat (msec): min=10, max=130, avg=67.98, stdev= 9.07 00:22:33.317 lat (msec): min=10, max=130, avg=69.05, stdev= 9.26 00:22:33.317 clat percentiles (msec): 00:22:33.317 | 1.00th=[ 54], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 61], 00:22:33.317 | 30.00th=[ 63], 40.00th=[ 66], 50.00th=[ 69], 60.00th=[ 71], 00:22:33.317 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 79], 95.00th=[ 85], 00:22:33.317 | 99.00th=[ 92], 99.50th=[ 94], 99.90th=[ 116], 99.95th=[ 126], 00:22:33.317 | 99.99th=[ 131] 00:22:33.317 bw ( KiB/s): min=184320, max=280576, per=5.87%, avg=236774.40, stdev=26222.71, samples=20 00:22:33.317 iops : min= 720, max= 1096, avg=924.90, stdev=102.43, samples=20 00:22:33.317 lat (msec) : 20=0.17%, 50=0.37%, 100=99.26%, 250=0.20% 00:22:33.317 cpu : usr=2.12%, sys=3.12%, ctx=2291, majf=0, minf=9 00:22:33.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:33.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.318 issued rwts: total=0,9312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.318 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.318 job7: (groupid=0, jobs=1): err= 0: pid=583974: Wed Nov 20 12:50:05 2024 00:22:33.318 write: IOPS=1070, BW=268MiB/s (281MB/s)(2685MiB/10035msec); 0 zone resets 00:22:33.318 slat (usec): min=21, max=14035, avg=926.49, stdev=1641.13 00:22:33.318 clat (usec): min=16697, max=80979, avg=58865.29, stdev=7435.94 00:22:33.318 lat (usec): min=16731, max=81025, avg=59791.78, stdev=7431.19 00:22:33.318 clat percentiles (usec): 00:22:33.318 | 1.00th=[41681], 5.00th=[43254], 10.00th=[45351], 20.00th=[55313], 00:22:33.318 | 30.00th=[57934], 40.00th=[58983], 50.00th=[59507], 60.00th=[60031], 00:22:33.318 | 70.00th=[61080], 80.00th=[64226], 90.00th=[67634], 95.00th=[69731], 00:22:33.318 | 99.00th=[73925], 99.50th=[76022], 99.90th=[78119], 99.95th=[79168], 00:22:33.318 | 99.99th=[81265] 00:22:33.318 bw ( KiB/s): min=221184, max=362496, per=6.78%, avg=273280.00, stdev=34249.81, samples=20 00:22:33.318 iops : min= 864, max= 1416, avg=1067.50, stdev=133.79, samples=20 00:22:33.318 lat (msec) : 20=0.07%, 50=12.97%, 100=86.95% 00:22:33.318 cpu : usr=2.60%, sys=3.80%, ctx=2669, majf=0, minf=142 00:22:33.318 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:33.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.318 issued rwts: 
total=0,10738,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.318 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.318 job8: (groupid=0, jobs=1): err= 0: pid=583990: Wed Nov 20 12:50:05 2024 00:22:33.318 write: IOPS=927, BW=232MiB/s (243MB/s)(2331MiB/10049msec); 0 zone resets 00:22:33.318 slat (usec): min=20, max=16494, avg=1068.18, stdev=2126.90 00:22:33.318 clat (msec): min=10, max=129, avg=67.90, stdev= 9.06 00:22:33.318 lat (msec): min=10, max=129, avg=68.96, stdev= 9.26 00:22:33.318 clat percentiles (msec): 00:22:33.318 | 1.00th=[ 54], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 61], 00:22:33.318 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 71], 00:22:33.318 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 79], 95.00th=[ 85], 00:22:33.318 | 99.00th=[ 91], 99.50th=[ 94], 99.90th=[ 117], 99.95th=[ 117], 00:22:33.318 | 99.99th=[ 130] 00:22:33.318 bw ( KiB/s): min=183296, max=280576, per=5.88%, avg=237056.00, stdev=26158.72, samples=20 00:22:33.318 iops : min= 716, max= 1096, avg=926.00, stdev=102.18, samples=20 00:22:33.318 lat (msec) : 20=0.19%, 50=0.36%, 100=99.28%, 250=0.16% 00:22:33.318 cpu : usr=2.12%, sys=3.19%, ctx=2285, majf=0, minf=270 00:22:33.318 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:33.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.318 issued rwts: total=0,9323,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.318 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.318 job9: (groupid=0, jobs=1): err= 0: pid=583993: Wed Nov 20 12:50:05 2024 00:22:33.318 write: IOPS=1068, BW=267MiB/s (280MB/s)(2682MiB/10035msec); 0 zone resets 00:22:33.318 slat (usec): min=18, max=12892, avg=927.36, stdev=1630.99 00:22:33.318 clat (usec): min=15456, max=80279, avg=58925.69, stdev=7515.24 00:22:33.318 lat (usec): min=15495, max=80373, avg=59853.05, stdev=7521.42 00:22:33.318 clat percentiles (usec): 00:22:33.318 | 1.00th=[41157], 5.00th=[43779], 10.00th=[45351], 20.00th=[55313], 00:22:33.318 | 30.00th=[57934], 40.00th=[58983], 50.00th=[59507], 60.00th=[60031], 00:22:33.318 | 70.00th=[61080], 80.00th=[64226], 90.00th=[68682], 95.00th=[69731], 00:22:33.318 | 99.00th=[74974], 99.50th=[76022], 99.90th=[78119], 99.95th=[78119], 00:22:33.318 | 99.99th=[80217] 00:22:33.318 bw ( KiB/s): min=221696, max=360960, per=6.77%, avg=272998.40, stdev=33936.07, samples=20 00:22:33.318 iops : min= 866, max= 1410, avg=1066.40, stdev=132.56, samples=20 00:22:33.318 lat (msec) : 20=0.15%, 50=12.90%, 100=86.95% 00:22:33.318 cpu : usr=2.51%, sys=3.78%, ctx=2663, majf=0, minf=142 00:22:33.318 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:33.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.318 issued rwts: total=0,10727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.318 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.318 job10: (groupid=0, jobs=1): err= 0: pid=583994: Wed Nov 20 12:50:05 2024 00:22:33.318 write: IOPS=927, BW=232MiB/s (243MB/s)(2329MiB/10048msec); 0 zone resets 00:22:33.318 slat (usec): min=22, max=15493, avg=1069.41, stdev=2124.66 00:22:33.318 clat (msec): min=16, max=129, avg=67.94, stdev= 8.89 00:22:33.318 lat (msec): min=16, max=130, avg=69.01, stdev= 9.08 00:22:33.318 clat percentiles (msec): 00:22:33.318 | 1.00th=[ 54], 5.00th=[ 56], 10.00th=[ 
58], 20.00th=[ 61], 00:22:33.318 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 71], 00:22:33.318 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 79], 95.00th=[ 85], 00:22:33.318 | 99.00th=[ 91], 99.50th=[ 94], 99.90th=[ 117], 99.95th=[ 123], 00:22:33.318 | 99.99th=[ 130] 00:22:33.318 bw ( KiB/s): min=185344, max=275968, per=5.87%, avg=236902.40, stdev=25779.43, samples=20 00:22:33.318 iops : min= 724, max= 1078, avg=925.40, stdev=100.70, samples=20 00:22:33.318 lat (msec) : 20=0.08%, 50=0.40%, 100=99.27%, 250=0.26% 00:22:33.318 cpu : usr=2.00%, sys=3.35%, ctx=2280, majf=0, minf=203 00:22:33.318 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:33.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:33.318 issued rwts: total=0,9317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.318 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:33.318 00:22:33.318 Run status group 0 (all jobs): 00:22:33.318 WRITE: bw=3939MiB/s (4130MB/s), 232MiB/s-935MiB/s (243MB/s-980MB/s), io=38.7GiB (41.5GB), run=10013-10049msec 00:22:33.318 00:22:33.318 Disk stats (read/write): 00:22:33.318 nvme0n1: ios=49/19261, merge=0/0, ticks=13/1219499, in_queue=1219512, util=97.28% 00:22:33.318 nvme10n1: ios=0/19851, merge=0/0, ticks=0/1219056, in_queue=1219056, util=97.42% 00:22:33.318 nvme1n1: ios=0/48315, merge=0/0, ticks=0/1234898, in_queue=1234898, util=97.71% 00:22:33.318 nvme2n1: ios=0/32570, merge=0/0, ticks=0/1223775, in_queue=1223775, util=97.82% 00:22:33.318 nvme3n1: ios=0/21134, merge=0/0, ticks=0/1220333, in_queue=1220333, util=97.88% 00:22:33.318 nvme4n1: ios=0/74439, merge=0/0, ticks=0/1220126, in_queue=1220126, util=98.19% 00:22:33.318 nvme5n1: ios=0/18401, merge=0/0, ticks=0/1216656, in_queue=1216656, util=98.34% 00:22:33.318 nvme6n1: ios=0/21167, merge=0/0, ticks=0/1219478, in_queue=1219478, util=98.44% 00:22:33.318 nvme7n1: ios=0/18416, merge=0/0, ticks=0/1216953, in_queue=1216953, util=98.79% 00:22:33.318 nvme8n1: ios=0/21148, merge=0/0, ticks=0/1222627, in_queue=1222627, util=98.97% 00:22:33.318 nvme9n1: ios=0/18409, merge=0/0, ticks=0/1217403, in_queue=1217403, util=99.09% 00:22:33.318 12:50:05 -- target/multiconnection.sh@36 -- # sync 00:22:33.318 12:50:05 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:33.318 12:50:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:33.318 12:50:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:33.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:33.890 12:50:06 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:33.891 12:50:06 -- common/autotest_common.sh@1208 -- # local i=0 00:22:33.891 12:50:06 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:33.891 12:50:06 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:22:33.891 12:50:06 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:33.891 12:50:06 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:22:33.891 12:50:06 -- common/autotest_common.sh@1220 -- # return 0 00:22:33.891 12:50:06 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:33.891 12:50:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.891 12:50:06 -- common/autotest_common.sh@10 -- # set +x 00:22:33.891 12:50:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.891 
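The disconnect sequence traced above, which the iterations that follow repeat for cnode2 through cnode11, pairs an initiator-side nvme-cli disconnect with a target-side SPDK RPC that removes the subsystem. A minimal sketch of that pattern, assuming the rpc.py path under this run's workspace and a target that still exposes all eleven subsystems:

#!/usr/bin/env bash
# Tear down one NVMe-oF subsystem per iteration: drop the host connection, then delete it on the target.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # path assumed from this run's workspace
for i in $(seq 1 11); do                                           # 11 matches the cnode1..cnode11 subsystems in this log
    nqn="nqn.2016-06.io.spdk:cnode${i}"
    nvme disconnect -n "${nqn}"                # initiator side: disconnect all controllers for this NQN
    "${RPC}" nvmf_delete_subsystem "${nqn}"    # target side: remove the subsystem via SPDK JSON-RPC
done

The test script additionally waits for the matching SPDK serial number to disappear from lsblk before deleting each subsystem, which this sketch omits.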
12:50:06 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:33.891 12:50:06 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:34.832 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:34.832 12:50:07 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:34.832 12:50:07 -- common/autotest_common.sh@1208 -- # local i=0 00:22:34.833 12:50:07 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:34.833 12:50:07 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:22:34.833 12:50:07 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:34.833 12:50:07 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:22:35.094 12:50:07 -- common/autotest_common.sh@1220 -- # return 0 00:22:35.094 12:50:07 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:35.094 12:50:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.094 12:50:07 -- common/autotest_common.sh@10 -- # set +x 00:22:35.094 12:50:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.094 12:50:07 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:35.094 12:50:07 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:36.481 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:36.481 12:50:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:36.481 12:50:09 -- common/autotest_common.sh@1208 -- # local i=0 00:22:36.481 12:50:09 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:36.481 12:50:09 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:22:36.481 12:50:09 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:36.481 12:50:09 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:22:36.481 12:50:09 -- common/autotest_common.sh@1220 -- # return 0 00:22:36.481 12:50:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:36.481 12:50:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.481 12:50:09 -- common/autotest_common.sh@10 -- # set +x 00:22:36.481 12:50:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.481 12:50:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.481 12:50:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:37.868 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:37.868 12:50:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:37.868 12:50:10 -- common/autotest_common.sh@1208 -- # local i=0 00:22:37.868 12:50:10 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:37.868 12:50:10 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:22:37.868 12:50:10 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:37.868 12:50:10 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:22:37.868 12:50:10 -- common/autotest_common.sh@1220 -- # return 0 00:22:37.868 12:50:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:37.868 12:50:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.868 12:50:10 -- common/autotest_common.sh@10 -- # set +x 00:22:37.868 12:50:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.868 12:50:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:37.868 12:50:10 
-- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:38.812 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:38.812 12:50:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:38.812 12:50:11 -- common/autotest_common.sh@1208 -- # local i=0 00:22:38.812 12:50:11 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:38.812 12:50:11 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:22:38.812 12:50:11 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:38.812 12:50:11 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:22:38.812 12:50:11 -- common/autotest_common.sh@1220 -- # return 0 00:22:38.812 12:50:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:38.812 12:50:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.812 12:50:11 -- common/autotest_common.sh@10 -- # set +x 00:22:38.812 12:50:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.812 12:50:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.812 12:50:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:40.197 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:40.197 12:50:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:40.197 12:50:13 -- common/autotest_common.sh@1208 -- # local i=0 00:22:40.197 12:50:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:40.197 12:50:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:22:40.197 12:50:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:40.197 12:50:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:22:40.197 12:50:13 -- common/autotest_common.sh@1220 -- # return 0 00:22:40.197 12:50:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:40.197 12:50:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.197 12:50:13 -- common/autotest_common.sh@10 -- # set +x 00:22:40.197 12:50:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.197 12:50:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:40.197 12:50:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:41.582 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:41.582 12:50:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:41.582 12:50:14 -- common/autotest_common.sh@1208 -- # local i=0 00:22:41.582 12:50:14 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:41.582 12:50:14 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:22:41.582 12:50:14 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:41.582 12:50:14 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:22:41.582 12:50:14 -- common/autotest_common.sh@1220 -- # return 0 00:22:41.582 12:50:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:41.582 12:50:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.582 12:50:14 -- common/autotest_common.sh@10 -- # set +x 00:22:41.582 12:50:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.582 12:50:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:41.582 12:50:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:42.968 
NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:42.968 12:50:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:42.968 12:50:15 -- common/autotest_common.sh@1208 -- # local i=0 00:22:42.968 12:50:15 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:42.968 12:50:15 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:22:42.968 12:50:15 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:22:42.968 12:50:15 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:42.968 12:50:15 -- common/autotest_common.sh@1220 -- # return 0 00:22:42.968 12:50:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:42.968 12:50:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.968 12:50:15 -- common/autotest_common.sh@10 -- # set +x 00:22:42.968 12:50:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.968 12:50:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:42.968 12:50:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:44.352 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:44.352 12:50:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:44.352 12:50:17 -- common/autotest_common.sh@1208 -- # local i=0 00:22:44.352 12:50:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:44.352 12:50:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:22:44.352 12:50:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:44.352 12:50:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:22:44.352 12:50:17 -- common/autotest_common.sh@1220 -- # return 0 00:22:44.352 12:50:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:44.352 12:50:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.352 12:50:17 -- common/autotest_common.sh@10 -- # set +x 00:22:44.352 12:50:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.352 12:50:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:44.352 12:50:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:45.735 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:45.735 12:50:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:45.735 12:50:18 -- common/autotest_common.sh@1208 -- # local i=0 00:22:45.735 12:50:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:45.735 12:50:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:22:45.735 12:50:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:45.735 12:50:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:22:45.735 12:50:18 -- common/autotest_common.sh@1220 -- # return 0 00:22:45.735 12:50:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:45.735 12:50:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.735 12:50:18 -- common/autotest_common.sh@10 -- # set +x 00:22:45.735 12:50:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.735 12:50:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:45.735 12:50:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:47.117 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:47.117 12:50:20 -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:47.117 12:50:20 -- common/autotest_common.sh@1208 -- # local i=0 00:22:47.117 12:50:20 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:47.117 12:50:20 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:22:47.117 12:50:20 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:47.117 12:50:20 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:22:47.117 12:50:20 -- common/autotest_common.sh@1220 -- # return 0 00:22:47.117 12:50:20 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:47.117 12:50:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.117 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:22:47.117 12:50:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.117 12:50:20 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:47.117 12:50:20 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:47.117 12:50:20 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:47.117 12:50:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:47.117 12:50:20 -- nvmf/common.sh@116 -- # sync 00:22:47.117 12:50:20 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:22:47.117 12:50:20 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:22:47.117 12:50:20 -- nvmf/common.sh@119 -- # set +e 00:22:47.117 12:50:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:47.117 12:50:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:22:47.117 rmmod nvme_rdma 00:22:47.117 rmmod nvme_fabrics 00:22:47.117 12:50:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:47.117 12:50:20 -- nvmf/common.sh@123 -- # set -e 00:22:47.117 12:50:20 -- nvmf/common.sh@124 -- # return 0 00:22:47.117 12:50:20 -- nvmf/common.sh@477 -- # '[' -n 573357 ']' 00:22:47.117 12:50:20 -- nvmf/common.sh@478 -- # killprocess 573357 00:22:47.117 12:50:20 -- common/autotest_common.sh@936 -- # '[' -z 573357 ']' 00:22:47.117 12:50:20 -- common/autotest_common.sh@940 -- # kill -0 573357 00:22:47.117 12:50:20 -- common/autotest_common.sh@941 -- # uname 00:22:47.117 12:50:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:47.117 12:50:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 573357 00:22:47.378 12:50:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:47.378 12:50:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:47.378 12:50:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 573357' 00:22:47.378 killing process with pid 573357 00:22:47.378 12:50:20 -- common/autotest_common.sh@955 -- # kill 573357 00:22:47.378 12:50:20 -- common/autotest_common.sh@960 -- # wait 573357 00:22:47.638 12:50:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:47.638 12:50:20 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:22:47.638 00:22:47.638 real 1m24.830s 00:22:47.638 user 5m41.980s 00:22:47.638 sys 0m17.332s 00:22:47.638 12:50:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:47.638 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:22:47.638 ************************************ 00:22:47.638 END TEST nvmf_multiconnection 00:22:47.638 ************************************ 00:22:47.638 12:50:20 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:22:47.638 12:50:20 -- common/autotest_common.sh@1087 
-- # '[' 3 -le 1 ']' 00:22:47.638 12:50:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:47.638 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:22:47.638 ************************************ 00:22:47.638 START TEST nvmf_initiator_timeout 00:22:47.638 ************************************ 00:22:47.638 12:50:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:22:47.900 * Looking for test storage... 00:22:47.900 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:47.900 12:50:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:47.900 12:50:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:47.900 12:50:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:47.900 12:50:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:47.900 12:50:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:47.900 12:50:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:47.900 12:50:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:47.900 12:50:20 -- scripts/common.sh@335 -- # IFS=.-: 00:22:47.900 12:50:20 -- scripts/common.sh@335 -- # read -ra ver1 00:22:47.900 12:50:20 -- scripts/common.sh@336 -- # IFS=.-: 00:22:47.900 12:50:20 -- scripts/common.sh@336 -- # read -ra ver2 00:22:47.900 12:50:20 -- scripts/common.sh@337 -- # local 'op=<' 00:22:47.900 12:50:20 -- scripts/common.sh@339 -- # ver1_l=2 00:22:47.900 12:50:20 -- scripts/common.sh@340 -- # ver2_l=1 00:22:47.900 12:50:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:47.900 12:50:20 -- scripts/common.sh@343 -- # case "$op" in 00:22:47.900 12:50:20 -- scripts/common.sh@344 -- # : 1 00:22:47.900 12:50:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:47.900 12:50:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:47.900 12:50:20 -- scripts/common.sh@364 -- # decimal 1 00:22:47.900 12:50:20 -- scripts/common.sh@352 -- # local d=1 00:22:47.900 12:50:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:47.900 12:50:20 -- scripts/common.sh@354 -- # echo 1 00:22:47.900 12:50:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:47.900 12:50:20 -- scripts/common.sh@365 -- # decimal 2 00:22:47.900 12:50:20 -- scripts/common.sh@352 -- # local d=2 00:22:47.900 12:50:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:47.900 12:50:20 -- scripts/common.sh@354 -- # echo 2 00:22:47.900 12:50:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:47.900 12:50:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:47.900 12:50:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:47.900 12:50:20 -- scripts/common.sh@367 -- # return 0 00:22:47.900 12:50:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:47.900 12:50:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:47.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.900 --rc genhtml_branch_coverage=1 00:22:47.900 --rc genhtml_function_coverage=1 00:22:47.900 --rc genhtml_legend=1 00:22:47.900 --rc geninfo_all_blocks=1 00:22:47.900 --rc geninfo_unexecuted_blocks=1 00:22:47.900 00:22:47.900 ' 00:22:47.900 12:50:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:47.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.900 --rc genhtml_branch_coverage=1 00:22:47.900 --rc genhtml_function_coverage=1 00:22:47.900 --rc genhtml_legend=1 00:22:47.900 --rc geninfo_all_blocks=1 00:22:47.900 --rc geninfo_unexecuted_blocks=1 00:22:47.901 00:22:47.901 ' 00:22:47.901 12:50:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:47.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.901 --rc genhtml_branch_coverage=1 00:22:47.901 --rc genhtml_function_coverage=1 00:22:47.901 --rc genhtml_legend=1 00:22:47.901 --rc geninfo_all_blocks=1 00:22:47.901 --rc geninfo_unexecuted_blocks=1 00:22:47.901 00:22:47.901 ' 00:22:47.901 12:50:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:47.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.901 --rc genhtml_branch_coverage=1 00:22:47.901 --rc genhtml_function_coverage=1 00:22:47.901 --rc genhtml_legend=1 00:22:47.901 --rc geninfo_all_blocks=1 00:22:47.901 --rc geninfo_unexecuted_blocks=1 00:22:47.901 00:22:47.901 ' 00:22:47.901 12:50:20 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.901 12:50:20 -- nvmf/common.sh@7 -- # uname -s 00:22:47.901 12:50:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.901 12:50:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.901 12:50:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.901 12:50:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.901 12:50:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.901 12:50:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.901 12:50:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.901 12:50:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.901 12:50:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.901 12:50:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.901 12:50:20 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:47.901 12:50:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:47.901 12:50:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.901 12:50:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.901 12:50:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.901 12:50:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:47.901 12:50:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.901 12:50:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.901 12:50:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.901 12:50:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.901 12:50:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.901 12:50:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.901 12:50:20 -- paths/export.sh@5 -- # export PATH 00:22:47.901 12:50:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.901 12:50:20 -- nvmf/common.sh@46 -- # : 0 00:22:47.901 12:50:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:47.901 12:50:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:47.901 12:50:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:47.901 12:50:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.901 12:50:20 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.901 12:50:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:47.901 12:50:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:47.901 12:50:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:47.901 12:50:20 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:47.901 12:50:20 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:47.901 12:50:20 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:47.901 12:50:20 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:22:47.901 12:50:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.901 12:50:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:47.901 12:50:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:47.901 12:50:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:47.901 12:50:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.901 12:50:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.901 12:50:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.901 12:50:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:47.901 12:50:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:47.901 12:50:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:47.901 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:22:56.049 12:50:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:56.049 12:50:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:56.049 12:50:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:56.049 12:50:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:56.049 12:50:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:56.049 12:50:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:56.049 12:50:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:56.049 12:50:27 -- nvmf/common.sh@294 -- # net_devs=() 00:22:56.049 12:50:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:56.049 12:50:27 -- nvmf/common.sh@295 -- # e810=() 00:22:56.049 12:50:27 -- nvmf/common.sh@295 -- # local -ga e810 00:22:56.049 12:50:27 -- nvmf/common.sh@296 -- # x722=() 00:22:56.049 12:50:27 -- nvmf/common.sh@296 -- # local -ga x722 00:22:56.049 12:50:27 -- nvmf/common.sh@297 -- # mlx=() 00:22:56.049 12:50:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:56.049 12:50:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.049 12:50:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.049 12:50:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.049 12:50:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.049 12:50:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.049 12:50:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.049 12:50:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.049 12:50:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.049 12:50:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.049 12:50:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.049 12:50:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.049 12:50:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:56.049 12:50:27 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:22:56.049 12:50:27 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:22:56.049 12:50:27 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:22:56.049 12:50:27 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:22:56.049 12:50:27 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:22:56.049 12:50:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:56.049 12:50:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:56.049 12:50:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:22:56.049 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:22:56.049 12:50:27 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:56.049 12:50:27 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:56.049 12:50:27 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:56.049 12:50:27 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:56.049 12:50:27 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:56.049 12:50:27 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:56.049 12:50:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:56.049 12:50:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:22:56.049 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:22:56.049 12:50:27 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:56.049 12:50:27 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:56.049 12:50:27 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:56.049 12:50:27 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:56.049 12:50:27 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:56.049 12:50:27 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:56.049 12:50:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:56.049 12:50:27 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:22:56.049 12:50:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:56.049 12:50:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.049 12:50:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:56.049 12:50:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.049 12:50:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:22:56.049 Found net devices under 0000:98:00.0: mlx_0_0 00:22:56.049 12:50:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.049 12:50:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:56.049 12:50:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.049 12:50:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:56.049 12:50:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.049 12:50:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:22:56.049 Found net devices under 0000:98:00.1: mlx_0_1 00:22:56.049 12:50:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.049 12:50:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:56.049 12:50:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:56.049 12:50:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:56.049 12:50:27 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:22:56.049 12:50:27 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:22:56.049 12:50:27 -- nvmf/common.sh@408 -- # rdma_device_init 00:22:56.049 12:50:27 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:22:56.049 12:50:27 -- nvmf/common.sh@57 -- # uname 00:22:56.049 12:50:27 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:22:56.049 12:50:27 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:22:56.049 12:50:27 -- nvmf/common.sh@62 -- # modprobe ib_core 00:22:56.049 12:50:27 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:22:56.049 12:50:27 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:22:56.049 12:50:27 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:22:56.049 12:50:27 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:22:56.049 12:50:27 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:22:56.049 12:50:27 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:22:56.049 12:50:27 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:56.049 12:50:27 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:22:56.049 12:50:27 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:56.050 12:50:27 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:56.050 12:50:27 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:56.050 12:50:27 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:56.050 12:50:27 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:56.050 12:50:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:56.050 12:50:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:56.050 12:50:27 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:56.050 12:50:27 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:56.050 12:50:27 -- nvmf/common.sh@104 -- # continue 2 00:22:56.050 12:50:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:56.050 12:50:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:56.050 12:50:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:56.050 12:50:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:56.050 12:50:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:56.050 12:50:27 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:56.050 12:50:27 -- nvmf/common.sh@104 -- # continue 2 00:22:56.050 12:50:27 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:56.050 12:50:27 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:22:56.050 12:50:27 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:56.050 12:50:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:56.050 12:50:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:56.050 12:50:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:56.050 12:50:27 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:22:56.050 12:50:27 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:22:56.050 12:50:27 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:22:56.050 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:56.050 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:22:56.050 altname enp152s0f0np0 00:22:56.050 altname ens817f0np0 00:22:56.050 inet 192.168.100.8/24 scope global mlx_0_0 00:22:56.050 valid_lft forever preferred_lft forever 00:22:56.050 12:50:27 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:56.050 12:50:27 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:22:56.050 12:50:27 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:56.050 12:50:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:56.050 12:50:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:56.050 12:50:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:56.050 12:50:27 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:22:56.050 12:50:27 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:22:56.050 12:50:27 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:22:56.050 5: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:56.050 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:22:56.050 altname enp152s0f1np1 00:22:56.050 altname ens817f1np1 00:22:56.050 inet 192.168.100.9/24 scope global mlx_0_1 00:22:56.050 valid_lft forever preferred_lft forever 00:22:56.050 12:50:27 -- nvmf/common.sh@410 -- # return 0 00:22:56.050 12:50:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:56.050 12:50:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:56.050 12:50:27 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:22:56.050 12:50:27 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:22:56.050 12:50:27 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:22:56.050 12:50:27 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:56.050 12:50:27 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:56.050 12:50:27 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:56.050 12:50:27 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:56.050 12:50:27 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:56.050 12:50:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:56.050 12:50:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:56.050 12:50:27 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:56.050 12:50:27 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:56.050 12:50:27 -- nvmf/common.sh@104 -- # continue 2 00:22:56.050 12:50:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:56.050 12:50:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:56.050 12:50:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:56.050 12:50:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:56.050 12:50:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:56.050 12:50:27 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:56.050 12:50:27 -- nvmf/common.sh@104 -- # continue 2 00:22:56.050 12:50:27 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:56.050 12:50:27 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:22:56.050 12:50:27 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:56.050 12:50:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:56.050 12:50:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:56.050 12:50:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:56.050 12:50:27 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:56.050 12:50:27 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:22:56.050 12:50:27 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:56.050 12:50:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:56.050 12:50:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:56.050 12:50:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:56.050 12:50:27 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:22:56.050 192.168.100.9' 00:22:56.050 12:50:27 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:22:56.050 192.168.100.9' 00:22:56.050 12:50:27 -- nvmf/common.sh@445 -- # head -n 1 00:22:56.050 12:50:27 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:56.050 12:50:27 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:56.050 192.168.100.9' 00:22:56.050 12:50:27 -- nvmf/common.sh@446 -- # head -n 1 00:22:56.050 12:50:27 -- nvmf/common.sh@446 -- # tail -n +2 00:22:56.050 12:50:27 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:56.050 12:50:27 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:22:56.050 12:50:27 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:56.050 12:50:27 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:22:56.050 12:50:27 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:22:56.050 12:50:27 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:22:56.050 12:50:28 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:56.050 12:50:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:56.050 12:50:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:56.050 12:50:28 -- common/autotest_common.sh@10 -- # set +x 00:22:56.050 12:50:28 -- nvmf/common.sh@469 -- # nvmfpid=592476 00:22:56.050 12:50:28 -- nvmf/common.sh@470 -- # waitforlisten 592476 00:22:56.050 12:50:28 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:56.050 12:50:28 -- common/autotest_common.sh@829 -- # '[' -z 592476 ']' 00:22:56.050 12:50:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.050 12:50:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:56.050 12:50:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.050 12:50:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:56.050 12:50:28 -- common/autotest_common.sh@10 -- # set +x 00:22:56.050 [2024-11-20 12:50:28.073512] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:56.050 [2024-11-20 12:50:28.073564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.050 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.050 [2024-11-20 12:50:28.135700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:56.050 [2024-11-20 12:50:28.199372] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:56.050 [2024-11-20 12:50:28.199511] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.050 [2024-11-20 12:50:28.199522] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.050 [2024-11-20 12:50:28.199530] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
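The initiator_timeout run above launches nvmf_tgt with core mask 0xF and then blocks until the RPC socket answers. A minimal standalone sketch of that start-and-wait pattern follows; the binary path and flags are taken from the trace, while the polling loop is an assumption standing in for the harness's waitforlisten helper, which is not shown here.

  # Sketch: start the target and poll its RPC socket until it is ready (polling loop is assumed)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
      sleep 0.5
  done
  echo "nvmf_tgt is up, pid $nvmfpid"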
00:22:56.051 [2024-11-20 12:50:28.199706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.051 [2024-11-20 12:50:28.199820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.051 [2024-11-20 12:50:28.199975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.051 [2024-11-20 12:50:28.199976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:56.051 12:50:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:56.051 12:50:28 -- common/autotest_common.sh@862 -- # return 0 00:22:56.051 12:50:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:56.051 12:50:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:56.051 12:50:28 -- common/autotest_common.sh@10 -- # set +x 00:22:56.051 12:50:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.051 12:50:28 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:56.051 12:50:28 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:56.051 12:50:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.051 12:50:28 -- common/autotest_common.sh@10 -- # set +x 00:22:56.051 Malloc0 00:22:56.051 12:50:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.051 12:50:28 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:56.051 12:50:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.051 12:50:28 -- common/autotest_common.sh@10 -- # set +x 00:22:56.051 Delay0 00:22:56.051 12:50:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.051 12:50:28 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:56.051 12:50:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.051 12:50:28 -- common/autotest_common.sh@10 -- # set +x 00:22:56.051 [2024-11-20 12:50:28.968972] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ffa770/0x2004bc0) succeed. 00:22:56.051 [2024-11-20 12:50:28.983680] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ffbd60/0x2046260) succeed. 
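The rpc_cmd calls traced above set up the malloc-backed delay bdev and the RDMA transport. Roughly equivalent rpc.py invocations, with every argument copied from the trace (a sketch, assuming the default /var/tmp/spdk.sock socket):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB malloc bdev, 512-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30    # delay bdev wrapping Malloc0, latencies in microseconds
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # RDMA transport, options as traced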
00:22:56.051 12:50:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.051 12:50:29 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:56.051 12:50:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.051 12:50:29 -- common/autotest_common.sh@10 -- # set +x 00:22:56.051 12:50:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.051 12:50:29 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:56.051 12:50:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.051 12:50:29 -- common/autotest_common.sh@10 -- # set +x 00:22:56.051 12:50:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.051 12:50:29 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:56.051 12:50:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.051 12:50:29 -- common/autotest_common.sh@10 -- # set +x 00:22:56.051 [2024-11-20 12:50:29.140387] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:56.051 12:50:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.051 12:50:29 -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:57.965 12:50:30 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:57.965 12:50:30 -- common/autotest_common.sh@1187 -- # local i=0 00:22:57.965 12:50:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:57.965 12:50:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:57.965 12:50:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:59.873 12:50:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:59.873 12:50:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:59.873 12:50:32 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:22:59.873 12:50:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:59.873 12:50:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:59.873 12:50:32 -- common/autotest_common.sh@1197 -- # return 0 00:22:59.873 12:50:32 -- target/initiator_timeout.sh@35 -- # fio_pid=593307 00:22:59.873 12:50:32 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:59.873 12:50:32 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:59.873 [global] 00:22:59.873 thread=1 00:22:59.873 invalidate=1 00:22:59.873 rw=write 00:22:59.873 time_based=1 00:22:59.873 runtime=60 00:22:59.873 ioengine=libaio 00:22:59.873 direct=1 00:22:59.873 bs=4096 00:22:59.873 iodepth=1 00:22:59.873 norandommap=0 00:22:59.873 numjobs=1 00:22:59.873 00:22:59.873 verify_dump=1 00:22:59.873 verify_backlog=512 00:22:59.873 verify_state_save=0 00:22:59.873 do_verify=1 00:22:59.873 verify=crc32c-intel 00:22:59.873 [job0] 00:22:59.873 filename=/dev/nvme0n1 00:22:59.873 Could not set queue depth (nvme0n1) 00:23:00.133 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:00.133 fio-3.35 00:23:00.133 Starting 1 thread 00:23:02.675 12:50:35 -- 
target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:02.675 12:50:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.675 12:50:35 -- common/autotest_common.sh@10 -- # set +x 00:23:02.675 true 00:23:02.675 12:50:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.675 12:50:35 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:02.675 12:50:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.675 12:50:35 -- common/autotest_common.sh@10 -- # set +x 00:23:02.675 true 00:23:02.675 12:50:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.675 12:50:35 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:02.675 12:50:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.675 12:50:35 -- common/autotest_common.sh@10 -- # set +x 00:23:02.675 true 00:23:02.675 12:50:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.675 12:50:35 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:02.675 12:50:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.675 12:50:35 -- common/autotest_common.sh@10 -- # set +x 00:23:02.675 true 00:23:02.675 12:50:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.675 12:50:35 -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:06.047 12:50:38 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:06.047 12:50:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.047 12:50:38 -- common/autotest_common.sh@10 -- # set +x 00:23:06.047 true 00:23:06.047 12:50:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.047 12:50:38 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:06.047 12:50:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.047 12:50:38 -- common/autotest_common.sh@10 -- # set +x 00:23:06.047 true 00:23:06.047 12:50:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.047 12:50:38 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:06.047 12:50:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.047 12:50:38 -- common/autotest_common.sh@10 -- # set +x 00:23:06.047 true 00:23:06.047 12:50:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.047 12:50:38 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:06.047 12:50:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.047 12:50:38 -- common/autotest_common.sh@10 -- # set +x 00:23:06.047 true 00:23:06.047 12:50:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.047 12:50:38 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:06.047 12:50:38 -- target/initiator_timeout.sh@54 -- # wait 593307 00:24:02.767 00:24:02.767 job0: (groupid=0, jobs=1): err= 0: pid=593624: Wed Nov 20 12:51:33 2024 00:24:02.767 read: IOPS=597, BW=2389KiB/s (2447kB/s)(140MiB/60000msec) 00:24:02.767 slat (usec): min=5, max=12296, avg=21.37, stdev=75.38 00:24:02.767 clat (usec): min=28, max=43244k, avg=1411.74, stdev=228422.28 00:24:02.767 lat (usec): min=87, max=43244k, avg=1433.11, stdev=228422.25 00:24:02.767 clat percentiles (usec): 00:24:02.767 | 1.00th=[ 89], 5.00th=[ 101], 10.00th=[ 110], 20.00th=[ 141], 00:24:02.767 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 208], 
60.00th=[ 215], 00:24:02.767 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 289], 95.00th=[ 318], 00:24:02.767 | 99.00th=[ 379], 99.50th=[ 400], 99.90th=[ 433], 99.95th=[ 445], 00:24:02.767 | 99.99th=[ 515] 00:24:02.767 write: IOPS=601, BW=2405KiB/s (2462kB/s)(141MiB/60000msec); 0 zone resets 00:24:02.767 slat (usec): min=7, max=793, avg=24.53, stdev=13.95 00:24:02.767 clat (usec): min=27, max=524, avg=202.97, stdev=62.78 00:24:02.767 lat (usec): min=88, max=821, avg=227.50, stdev=65.09 00:24:02.767 clat percentiles (usec): 00:24:02.767 | 1.00th=[ 88], 5.00th=[ 100], 10.00th=[ 112], 20.00th=[ 133], 00:24:02.767 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 215], 00:24:02.767 | 70.00th=[ 225], 80.00th=[ 241], 90.00th=[ 285], 95.00th=[ 314], 00:24:02.767 | 99.00th=[ 367], 99.50th=[ 388], 99.90th=[ 429], 99.95th=[ 437], 00:24:02.767 | 99.99th=[ 457] 00:24:02.767 bw ( KiB/s): min= 1648, max=12288, per=100.00%, avg=8261.88, stdev=1667.57, samples=34 00:24:02.767 iops : min= 412, max= 3072, avg=2065.47, stdev=416.89, samples=34 00:24:02.767 lat (usec) : 50=0.01%, 100=4.95%, 250=78.14%, 500=16.90%, 750=0.01% 00:24:02.767 lat (msec) : >=2000=0.01% 00:24:02.767 cpu : usr=1.95%, sys=3.56%, ctx=71917, majf=0, minf=212 00:24:02.767 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:02.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.767 issued rwts: total=35840,36071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.767 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:02.767 00:24:02.767 Run status group 0 (all jobs): 00:24:02.767 READ: bw=2389KiB/s (2447kB/s), 2389KiB/s-2389KiB/s (2447kB/s-2447kB/s), io=140MiB (147MB), run=60000-60000msec 00:24:02.767 WRITE: bw=2405KiB/s (2462kB/s), 2405KiB/s-2405KiB/s (2462kB/s-2462kB/s), io=141MiB (148MB), run=60000-60000msec 00:24:02.767 00:24:02.767 Disk stats (read/write): 00:24:02.767 nvme0n1: ios=35848/35840, merge=0/0, ticks=5174/5113, in_queue=10287, util=99.66% 00:24:02.767 12:51:33 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:02.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:02.767 12:51:34 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:02.767 12:51:34 -- common/autotest_common.sh@1208 -- # local i=0 00:24:02.767 12:51:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:24:02.767 12:51:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:02.767 12:51:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:24:02.767 12:51:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:02.767 12:51:34 -- common/autotest_common.sh@1220 -- # return 0 00:24:02.767 12:51:34 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:02.767 12:51:34 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:02.767 nvmf hotplug test: fio successful as expected 00:24:02.767 12:51:34 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:02.767 12:51:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.767 12:51:34 -- common/autotest_common.sh@10 -- # set +x 00:24:02.767 12:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.767 12:51:34 -- target/initiator_timeout.sh@69 -- # rm -f 
./local-job0-0-verify.state 00:24:02.767 12:51:34 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:02.767 12:51:34 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:02.767 12:51:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:02.767 12:51:34 -- nvmf/common.sh@116 -- # sync 00:24:02.767 12:51:34 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:02.767 12:51:34 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:02.767 12:51:34 -- nvmf/common.sh@119 -- # set +e 00:24:02.767 12:51:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:02.767 12:51:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:02.767 rmmod nvme_rdma 00:24:02.767 rmmod nvme_fabrics 00:24:02.767 12:51:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:02.767 12:51:34 -- nvmf/common.sh@123 -- # set -e 00:24:02.767 12:51:34 -- nvmf/common.sh@124 -- # return 0 00:24:02.767 12:51:34 -- nvmf/common.sh@477 -- # '[' -n 592476 ']' 00:24:02.767 12:51:34 -- nvmf/common.sh@478 -- # killprocess 592476 00:24:02.767 12:51:34 -- common/autotest_common.sh@936 -- # '[' -z 592476 ']' 00:24:02.767 12:51:34 -- common/autotest_common.sh@940 -- # kill -0 592476 00:24:02.767 12:51:34 -- common/autotest_common.sh@941 -- # uname 00:24:02.767 12:51:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:02.767 12:51:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 592476 00:24:02.767 12:51:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:02.767 12:51:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:02.767 12:51:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 592476' 00:24:02.767 killing process with pid 592476 00:24:02.767 12:51:34 -- common/autotest_common.sh@955 -- # kill 592476 00:24:02.767 12:51:34 -- common/autotest_common.sh@960 -- # wait 592476 00:24:02.767 12:51:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:02.767 12:51:34 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:02.767 00:24:02.767 real 1m14.221s 00:24:02.767 user 4m45.015s 00:24:02.767 sys 0m7.857s 00:24:02.767 12:51:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:02.767 12:51:34 -- common/autotest_common.sh@10 -- # set +x 00:24:02.767 ************************************ 00:24:02.767 END TEST nvmf_initiator_timeout 00:24:02.767 ************************************ 00:24:02.767 12:51:34 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:24:02.767 12:51:34 -- nvmf/nvmf.sh@70 -- # '[' rdma = tcp ']' 00:24:02.767 12:51:34 -- nvmf/nvmf.sh@76 -- # [[ '' -eq 1 ]] 00:24:02.767 12:51:34 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:24:02.767 12:51:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:02.767 12:51:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:02.767 12:51:34 -- common/autotest_common.sh@10 -- # set +x 00:24:02.767 ************************************ 00:24:02.767 START TEST nvmf_shutdown 00:24:02.767 ************************************ 00:24:02.767 12:51:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:24:02.767 * Looking for test storage... 
00:24:02.767 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:02.767 12:51:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:02.767 12:51:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:02.767 12:51:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:02.767 12:51:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:02.767 12:51:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:02.767 12:51:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:02.767 12:51:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:02.767 12:51:35 -- scripts/common.sh@335 -- # IFS=.-: 00:24:02.767 12:51:35 -- scripts/common.sh@335 -- # read -ra ver1 00:24:02.767 12:51:35 -- scripts/common.sh@336 -- # IFS=.-: 00:24:02.767 12:51:35 -- scripts/common.sh@336 -- # read -ra ver2 00:24:02.768 12:51:35 -- scripts/common.sh@337 -- # local 'op=<' 00:24:02.768 12:51:35 -- scripts/common.sh@339 -- # ver1_l=2 00:24:02.768 12:51:35 -- scripts/common.sh@340 -- # ver2_l=1 00:24:02.768 12:51:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:02.768 12:51:35 -- scripts/common.sh@343 -- # case "$op" in 00:24:02.768 12:51:35 -- scripts/common.sh@344 -- # : 1 00:24:02.768 12:51:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:02.768 12:51:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:02.768 12:51:35 -- scripts/common.sh@364 -- # decimal 1 00:24:02.768 12:51:35 -- scripts/common.sh@352 -- # local d=1 00:24:02.768 12:51:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:02.768 12:51:35 -- scripts/common.sh@354 -- # echo 1 00:24:02.768 12:51:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:02.768 12:51:35 -- scripts/common.sh@365 -- # decimal 2 00:24:02.768 12:51:35 -- scripts/common.sh@352 -- # local d=2 00:24:02.768 12:51:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:02.768 12:51:35 -- scripts/common.sh@354 -- # echo 2 00:24:02.768 12:51:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:02.768 12:51:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:02.768 12:51:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:02.768 12:51:35 -- scripts/common.sh@367 -- # return 0 00:24:02.768 12:51:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:02.768 12:51:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:02.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.768 --rc genhtml_branch_coverage=1 00:24:02.768 --rc genhtml_function_coverage=1 00:24:02.768 --rc genhtml_legend=1 00:24:02.768 --rc geninfo_all_blocks=1 00:24:02.768 --rc geninfo_unexecuted_blocks=1 00:24:02.768 00:24:02.768 ' 00:24:02.768 12:51:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:02.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.768 --rc genhtml_branch_coverage=1 00:24:02.768 --rc genhtml_function_coverage=1 00:24:02.768 --rc genhtml_legend=1 00:24:02.768 --rc geninfo_all_blocks=1 00:24:02.768 --rc geninfo_unexecuted_blocks=1 00:24:02.768 00:24:02.768 ' 00:24:02.768 12:51:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:02.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.768 --rc genhtml_branch_coverage=1 00:24:02.768 --rc genhtml_function_coverage=1 00:24:02.768 --rc genhtml_legend=1 00:24:02.768 --rc geninfo_all_blocks=1 00:24:02.768 --rc geninfo_unexecuted_blocks=1 00:24:02.768 00:24:02.768 ' 
00:24:02.768 12:51:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:02.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.768 --rc genhtml_branch_coverage=1 00:24:02.768 --rc genhtml_function_coverage=1 00:24:02.768 --rc genhtml_legend=1 00:24:02.768 --rc geninfo_all_blocks=1 00:24:02.768 --rc geninfo_unexecuted_blocks=1 00:24:02.768 00:24:02.768 ' 00:24:02.768 12:51:35 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.768 12:51:35 -- nvmf/common.sh@7 -- # uname -s 00:24:02.768 12:51:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.768 12:51:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.768 12:51:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.768 12:51:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.768 12:51:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.768 12:51:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.768 12:51:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.768 12:51:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.768 12:51:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.768 12:51:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.768 12:51:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:02.768 12:51:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:02.768 12:51:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.768 12:51:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.768 12:51:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.768 12:51:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:02.768 12:51:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.768 12:51:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.768 12:51:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.768 12:51:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.768 12:51:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.768 12:51:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.768 12:51:35 -- paths/export.sh@5 -- # export PATH 00:24:02.768 12:51:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.768 12:51:35 -- nvmf/common.sh@46 -- # : 0 00:24:02.768 12:51:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:02.768 12:51:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:02.768 12:51:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:02.768 12:51:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.768 12:51:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.768 12:51:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:02.768 12:51:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:02.768 12:51:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:02.768 12:51:35 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:02.768 12:51:35 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:02.768 12:51:35 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:02.768 12:51:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:02.768 12:51:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:02.768 12:51:35 -- common/autotest_common.sh@10 -- # set +x 00:24:02.768 ************************************ 00:24:02.768 START TEST nvmf_shutdown_tc1 00:24:02.768 ************************************ 00:24:02.768 12:51:35 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc1 00:24:02.768 12:51:35 -- target/shutdown.sh@74 -- # starttarget 00:24:02.768 12:51:35 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:02.768 12:51:35 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:02.768 12:51:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.768 12:51:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:02.768 12:51:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:02.768 12:51:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:02.768 12:51:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.768 12:51:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.768 12:51:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.768 12:51:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:02.768 12:51:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:02.768 12:51:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:02.768 12:51:35 -- common/autotest_common.sh@10 -- # set +x 00:24:09.365 12:51:42 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:24:09.365 12:51:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:09.365 12:51:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:09.365 12:51:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:09.365 12:51:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:09.365 12:51:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:09.365 12:51:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:09.365 12:51:42 -- nvmf/common.sh@294 -- # net_devs=() 00:24:09.365 12:51:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:09.365 12:51:42 -- nvmf/common.sh@295 -- # e810=() 00:24:09.365 12:51:42 -- nvmf/common.sh@295 -- # local -ga e810 00:24:09.365 12:51:42 -- nvmf/common.sh@296 -- # x722=() 00:24:09.365 12:51:42 -- nvmf/common.sh@296 -- # local -ga x722 00:24:09.365 12:51:42 -- nvmf/common.sh@297 -- # mlx=() 00:24:09.365 12:51:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:09.365 12:51:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.365 12:51:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.365 12:51:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.365 12:51:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.365 12:51:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.365 12:51:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.365 12:51:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.365 12:51:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.365 12:51:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.365 12:51:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.366 12:51:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.366 12:51:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:09.366 12:51:42 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:09.366 12:51:42 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:09.366 12:51:42 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:09.366 12:51:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:09.366 12:51:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:09.366 12:51:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:24:09.366 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:24:09.366 12:51:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:09.366 12:51:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:09.366 12:51:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:24:09.366 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:24:09.366 12:51:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:09.366 12:51:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:09.366 12:51:42 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:09.366 12:51:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.366 12:51:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:09.366 12:51:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.366 12:51:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:24:09.366 Found net devices under 0000:98:00.0: mlx_0_0 00:24:09.366 12:51:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.366 12:51:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:09.366 12:51:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.366 12:51:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:09.366 12:51:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.366 12:51:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:24:09.366 Found net devices under 0000:98:00.1: mlx_0_1 00:24:09.366 12:51:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.366 12:51:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:09.366 12:51:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:09.366 12:51:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:09.366 12:51:42 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:09.366 12:51:42 -- nvmf/common.sh@57 -- # uname 00:24:09.366 12:51:42 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:09.366 12:51:42 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:09.366 12:51:42 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:09.366 12:51:42 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:09.366 12:51:42 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:09.366 12:51:42 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:09.366 12:51:42 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:09.366 12:51:42 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:09.366 12:51:42 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:09.366 12:51:42 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:09.366 12:51:42 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:09.366 12:51:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:09.366 12:51:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:09.366 12:51:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:09.366 12:51:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:09.366 12:51:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:09.366 12:51:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:09.366 12:51:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:09.366 12:51:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:09.366 12:51:42 -- nvmf/common.sh@104 -- # continue 2 
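The device-discovery pass above finds the two Mellanox functions (0x15b3:0x1015) by PCI ID and then resolves each one to its kernel net device by globbing sysfs. A standalone sketch of that lookup, using one of the addresses from the trace:

  # Sketch: map a PCI function to its net device name via sysfs (PCI address taken from the trace)
  pci=0000:98:00.0
  pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
  pci_net_devs=( "${pci_net_devs[@]##*/}" )                  # strip the sysfs prefix, keep the interface name
  echo "Found net devices under $pci: ${pci_net_devs[*]}"    # prints mlx_0_0 on this rig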
00:24:09.366 12:51:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:09.366 12:51:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:09.366 12:51:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:09.366 12:51:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:09.366 12:51:42 -- nvmf/common.sh@104 -- # continue 2 00:24:09.366 12:51:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:09.366 12:51:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:09.366 12:51:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:09.366 12:51:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:09.366 12:51:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:09.366 12:51:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:09.366 12:51:42 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:09.366 12:51:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:09.366 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:09.366 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:24:09.366 altname enp152s0f0np0 00:24:09.366 altname ens817f0np0 00:24:09.366 inet 192.168.100.8/24 scope global mlx_0_0 00:24:09.366 valid_lft forever preferred_lft forever 00:24:09.366 12:51:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:09.366 12:51:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:09.366 12:51:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:09.366 12:51:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:09.366 12:51:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:09.366 12:51:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:09.366 12:51:42 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:09.366 12:51:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:09.366 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:09.366 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:24:09.366 altname enp152s0f1np1 00:24:09.366 altname ens817f1np1 00:24:09.366 inet 192.168.100.9/24 scope global mlx_0_1 00:24:09.366 valid_lft forever preferred_lft forever 00:24:09.366 12:51:42 -- nvmf/common.sh@410 -- # return 0 00:24:09.366 12:51:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:09.366 12:51:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:09.366 12:51:42 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:09.366 12:51:42 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:09.366 12:51:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:09.366 12:51:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:09.366 12:51:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:09.366 12:51:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:09.366 12:51:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:09.366 12:51:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:09.366 12:51:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:09.366 12:51:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:09.366 12:51:42 -- 
nvmf/common.sh@103 -- # echo mlx_0_0 00:24:09.366 12:51:42 -- nvmf/common.sh@104 -- # continue 2 00:24:09.366 12:51:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:09.366 12:51:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:09.366 12:51:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:09.366 12:51:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:09.366 12:51:42 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:09.366 12:51:42 -- nvmf/common.sh@104 -- # continue 2 00:24:09.366 12:51:42 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:09.366 12:51:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:09.366 12:51:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:09.366 12:51:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:09.366 12:51:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:09.366 12:51:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:09.366 12:51:42 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:09.366 12:51:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:09.366 12:51:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:09.366 12:51:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:09.366 12:51:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:09.366 12:51:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:09.366 12:51:42 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:09.366 192.168.100.9' 00:24:09.366 12:51:42 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:09.366 192.168.100.9' 00:24:09.366 12:51:42 -- nvmf/common.sh@445 -- # head -n 1 00:24:09.366 12:51:42 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:09.366 12:51:42 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:09.366 192.168.100.9' 00:24:09.366 12:51:42 -- nvmf/common.sh@446 -- # tail -n +2 00:24:09.366 12:51:42 -- nvmf/common.sh@446 -- # head -n 1 00:24:09.366 12:51:42 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:09.366 12:51:42 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:09.366 12:51:42 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:09.367 12:51:42 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:09.367 12:51:42 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:09.367 12:51:42 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:09.367 12:51:42 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:09.367 12:51:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:09.367 12:51:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:09.367 12:51:42 -- common/autotest_common.sh@10 -- # set +x 00:24:09.367 12:51:42 -- nvmf/common.sh@469 -- # nvmfpid=609863 00:24:09.367 12:51:42 -- nvmf/common.sh@470 -- # waitforlisten 609863 00:24:09.367 12:51:42 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:09.367 12:51:42 -- common/autotest_common.sh@829 -- # '[' -z 609863 ']' 00:24:09.367 12:51:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.367 12:51:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.367 12:51:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:09.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.367 12:51:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.367 12:51:42 -- common/autotest_common.sh@10 -- # set +x 00:24:09.367 [2024-11-20 12:51:42.447788] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:09.367 [2024-11-20 12:51:42.447849] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.627 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.627 [2024-11-20 12:51:42.524829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:09.627 [2024-11-20 12:51:42.587567] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:09.627 [2024-11-20 12:51:42.587696] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.627 [2024-11-20 12:51:42.587706] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.627 [2024-11-20 12:51:42.587720] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.627 [2024-11-20 12:51:42.587837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.627 [2024-11-20 12:51:42.587975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.627 [2024-11-20 12:51:42.588131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.627 [2024-11-20 12:51:42.588132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:10.199 12:51:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.199 12:51:43 -- common/autotest_common.sh@862 -- # return 0 00:24:10.199 12:51:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:10.199 12:51:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:10.199 12:51:43 -- common/autotest_common.sh@10 -- # set +x 00:24:10.460 12:51:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.460 12:51:43 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:10.460 12:51:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.460 12:51:43 -- common/autotest_common.sh@10 -- # set +x 00:24:10.460 [2024-11-20 12:51:43.361780] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xaedac0/0xaf1fb0) succeed. 00:24:10.460 [2024-11-20 12:51:43.377999] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xaef0b0/0xb33650) succeed. 
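The shutdown target is started with core mask 0x1E rather than 0xF, which is why the reactor messages above report cores 1 through 4 instead of 0 through 3. A one-liner that expands a hex core mask into the core list, for illustration only:

  # Sketch: expand core mask 0x1E (binary 11110) into the cores reported by the reactors above
  mask=0x1E
  for c in {0..7}; do (( (mask >> c) & 1 )) && printf '%d ' "$c"; done; echo    # prints: 1 2 3 4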
00:24:10.460 12:51:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.460 12:51:43 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:10.460 12:51:43 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:10.460 12:51:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:10.460 12:51:43 -- common/autotest_common.sh@10 -- # set +x 00:24:10.460 12:51:43 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:10.460 12:51:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:10.460 12:51:43 -- target/shutdown.sh@28 -- # cat 00:24:10.460 12:51:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:10.460 12:51:43 -- target/shutdown.sh@28 -- # cat 00:24:10.460 12:51:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:10.460 12:51:43 -- target/shutdown.sh@28 -- # cat 00:24:10.460 12:51:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:10.460 12:51:43 -- target/shutdown.sh@28 -- # cat 00:24:10.460 12:51:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:10.460 12:51:43 -- target/shutdown.sh@28 -- # cat 00:24:10.460 12:51:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:10.460 12:51:43 -- target/shutdown.sh@28 -- # cat 00:24:10.460 12:51:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:10.460 12:51:43 -- target/shutdown.sh@28 -- # cat 00:24:10.460 12:51:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:10.460 12:51:43 -- target/shutdown.sh@28 -- # cat 00:24:10.460 12:51:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:10.460 12:51:43 -- target/shutdown.sh@28 -- # cat 00:24:10.460 12:51:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:10.460 12:51:43 -- target/shutdown.sh@28 -- # cat 00:24:10.460 12:51:43 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:10.460 12:51:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.460 12:51:43 -- common/autotest_common.sh@10 -- # set +x 00:24:10.721 Malloc1 00:24:10.721 [2024-11-20 12:51:43.598654] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:10.721 Malloc2 00:24:10.721 Malloc3 00:24:10.721 Malloc4 00:24:10.721 Malloc5 00:24:10.721 Malloc6 00:24:10.983 Malloc7 00:24:10.983 Malloc8 00:24:10.983 Malloc9 00:24:10.983 Malloc10 00:24:10.983 12:51:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.983 12:51:43 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:10.983 12:51:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:10.983 12:51:43 -- common/autotest_common.sh@10 -- # set +x 00:24:10.983 12:51:44 -- target/shutdown.sh@78 -- # perfpid=610180 00:24:10.983 12:51:44 -- target/shutdown.sh@79 -- # waitforlisten 610180 /var/tmp/bdevperf.sock 00:24:10.983 12:51:44 -- common/autotest_common.sh@829 -- # '[' -z 610180 ']' 00:24:10.983 12:51:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:10.983 12:51:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:10.983 12:51:44 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:10.983 12:51:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
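The bdevperf side of shutdown_tc1 is launched as bdev_svc with a JSON config generated on the fly: gen_nvmf_target_json emits one NVMe-oF attach block per subsystem (the blocks are spelled out below) and the result reaches the app through process substitution, as in this condensed sketch of the invocation traced above:

  # Sketch: feed the generated per-subsystem config to bdev_svc via process substitution
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 \
      -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10)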
00:24:10.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:10.983 12:51:44 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:10.983 12:51:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.983 12:51:44 -- common/autotest_common.sh@10 -- # set +x 00:24:10.983 12:51:44 -- nvmf/common.sh@520 -- # config=() 00:24:10.983 12:51:44 -- nvmf/common.sh@520 -- # local subsystem config 00:24:10.983 12:51:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:10.983 12:51:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:10.983 { 00:24:10.983 "params": { 00:24:10.983 "name": "Nvme$subsystem", 00:24:10.983 "trtype": "$TEST_TRANSPORT", 00:24:10.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.983 "adrfam": "ipv4", 00:24:10.983 "trsvcid": "$NVMF_PORT", 00:24:10.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.983 "hdgst": ${hdgst:-false}, 00:24:10.983 "ddgst": ${ddgst:-false} 00:24:10.983 }, 00:24:10.984 "method": "bdev_nvme_attach_controller" 00:24:10.984 } 00:24:10.984 EOF 00:24:10.984 )") 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # cat 00:24:10.984 12:51:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:10.984 { 00:24:10.984 "params": { 00:24:10.984 "name": "Nvme$subsystem", 00:24:10.984 "trtype": "$TEST_TRANSPORT", 00:24:10.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.984 "adrfam": "ipv4", 00:24:10.984 "trsvcid": "$NVMF_PORT", 00:24:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.984 "hdgst": ${hdgst:-false}, 00:24:10.984 "ddgst": ${ddgst:-false} 00:24:10.984 }, 00:24:10.984 "method": "bdev_nvme_attach_controller" 00:24:10.984 } 00:24:10.984 EOF 00:24:10.984 )") 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # cat 00:24:10.984 12:51:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:10.984 { 00:24:10.984 "params": { 00:24:10.984 "name": "Nvme$subsystem", 00:24:10.984 "trtype": "$TEST_TRANSPORT", 00:24:10.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.984 "adrfam": "ipv4", 00:24:10.984 "trsvcid": "$NVMF_PORT", 00:24:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.984 "hdgst": ${hdgst:-false}, 00:24:10.984 "ddgst": ${ddgst:-false} 00:24:10.984 }, 00:24:10.984 "method": "bdev_nvme_attach_controller" 00:24:10.984 } 00:24:10.984 EOF 00:24:10.984 )") 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # cat 00:24:10.984 12:51:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:10.984 { 00:24:10.984 "params": { 00:24:10.984 "name": "Nvme$subsystem", 00:24:10.984 "trtype": "$TEST_TRANSPORT", 00:24:10.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.984 "adrfam": "ipv4", 00:24:10.984 "trsvcid": "$NVMF_PORT", 00:24:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.984 "hdgst": ${hdgst:-false}, 00:24:10.984 "ddgst": ${ddgst:-false} 00:24:10.984 }, 00:24:10.984 "method": "bdev_nvme_attach_controller" 00:24:10.984 } 00:24:10.984 EOF 00:24:10.984 )") 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # cat 00:24:10.984 12:51:44 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:10.984 { 00:24:10.984 "params": { 00:24:10.984 "name": "Nvme$subsystem", 00:24:10.984 "trtype": "$TEST_TRANSPORT", 00:24:10.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.984 "adrfam": "ipv4", 00:24:10.984 "trsvcid": "$NVMF_PORT", 00:24:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.984 "hdgst": ${hdgst:-false}, 00:24:10.984 "ddgst": ${ddgst:-false} 00:24:10.984 }, 00:24:10.984 "method": "bdev_nvme_attach_controller" 00:24:10.984 } 00:24:10.984 EOF 00:24:10.984 )") 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # cat 00:24:10.984 12:51:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:10.984 { 00:24:10.984 "params": { 00:24:10.984 "name": "Nvme$subsystem", 00:24:10.984 "trtype": "$TEST_TRANSPORT", 00:24:10.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.984 "adrfam": "ipv4", 00:24:10.984 "trsvcid": "$NVMF_PORT", 00:24:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.984 "hdgst": ${hdgst:-false}, 00:24:10.984 "ddgst": ${ddgst:-false} 00:24:10.984 }, 00:24:10.984 "method": "bdev_nvme_attach_controller" 00:24:10.984 } 00:24:10.984 EOF 00:24:10.984 )") 00:24:10.984 [2024-11-20 12:51:44.056773] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:10.984 [2024-11-20 12:51:44.056828] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # cat 00:24:10.984 12:51:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:10.984 { 00:24:10.984 "params": { 00:24:10.984 "name": "Nvme$subsystem", 00:24:10.984 "trtype": "$TEST_TRANSPORT", 00:24:10.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.984 "adrfam": "ipv4", 00:24:10.984 "trsvcid": "$NVMF_PORT", 00:24:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.984 "hdgst": ${hdgst:-false}, 00:24:10.984 "ddgst": ${ddgst:-false} 00:24:10.984 }, 00:24:10.984 "method": "bdev_nvme_attach_controller" 00:24:10.984 } 00:24:10.984 EOF 00:24:10.984 )") 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # cat 00:24:10.984 12:51:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:10.984 { 00:24:10.984 "params": { 00:24:10.984 "name": "Nvme$subsystem", 00:24:10.984 "trtype": "$TEST_TRANSPORT", 00:24:10.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.984 "adrfam": "ipv4", 00:24:10.984 "trsvcid": "$NVMF_PORT", 00:24:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.984 "hdgst": ${hdgst:-false}, 00:24:10.984 "ddgst": ${ddgst:-false} 00:24:10.984 }, 00:24:10.984 "method": "bdev_nvme_attach_controller" 00:24:10.984 } 00:24:10.984 EOF 00:24:10.984 )") 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # cat 00:24:10.984 12:51:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:10.984 { 00:24:10.984 "params": { 
00:24:10.984 "name": "Nvme$subsystem", 00:24:10.984 "trtype": "$TEST_TRANSPORT", 00:24:10.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.984 "adrfam": "ipv4", 00:24:10.984 "trsvcid": "$NVMF_PORT", 00:24:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.984 "hdgst": ${hdgst:-false}, 00:24:10.984 "ddgst": ${ddgst:-false} 00:24:10.984 }, 00:24:10.984 "method": "bdev_nvme_attach_controller" 00:24:10.984 } 00:24:10.984 EOF 00:24:10.984 )") 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # cat 00:24:10.984 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.984 12:51:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:10.984 { 00:24:10.984 "params": { 00:24:10.984 "name": "Nvme$subsystem", 00:24:10.984 "trtype": "$TEST_TRANSPORT", 00:24:10.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.984 "adrfam": "ipv4", 00:24:10.984 "trsvcid": "$NVMF_PORT", 00:24:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.984 "hdgst": ${hdgst:-false}, 00:24:10.984 "ddgst": ${ddgst:-false} 00:24:10.984 }, 00:24:10.984 "method": "bdev_nvme_attach_controller" 00:24:10.984 } 00:24:10.984 EOF 00:24:10.984 )") 00:24:10.984 12:51:44 -- nvmf/common.sh@542 -- # cat 00:24:11.246 12:51:44 -- nvmf/common.sh@544 -- # jq . 00:24:11.246 12:51:44 -- nvmf/common.sh@545 -- # IFS=, 00:24:11.246 12:51:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:11.246 "params": { 00:24:11.246 "name": "Nvme1", 00:24:11.246 "trtype": "rdma", 00:24:11.246 "traddr": "192.168.100.8", 00:24:11.246 "adrfam": "ipv4", 00:24:11.246 "trsvcid": "4420", 00:24:11.246 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.246 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:11.246 "hdgst": false, 00:24:11.246 "ddgst": false 00:24:11.246 }, 00:24:11.246 "method": "bdev_nvme_attach_controller" 00:24:11.246 },{ 00:24:11.246 "params": { 00:24:11.246 "name": "Nvme2", 00:24:11.246 "trtype": "rdma", 00:24:11.246 "traddr": "192.168.100.8", 00:24:11.246 "adrfam": "ipv4", 00:24:11.246 "trsvcid": "4420", 00:24:11.246 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:11.246 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:11.246 "hdgst": false, 00:24:11.246 "ddgst": false 00:24:11.246 }, 00:24:11.246 "method": "bdev_nvme_attach_controller" 00:24:11.246 },{ 00:24:11.246 "params": { 00:24:11.246 "name": "Nvme3", 00:24:11.246 "trtype": "rdma", 00:24:11.246 "traddr": "192.168.100.8", 00:24:11.246 "adrfam": "ipv4", 00:24:11.246 "trsvcid": "4420", 00:24:11.246 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:11.246 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:11.246 "hdgst": false, 00:24:11.246 "ddgst": false 00:24:11.246 }, 00:24:11.246 "method": "bdev_nvme_attach_controller" 00:24:11.246 },{ 00:24:11.246 "params": { 00:24:11.246 "name": "Nvme4", 00:24:11.246 "trtype": "rdma", 00:24:11.246 "traddr": "192.168.100.8", 00:24:11.246 "adrfam": "ipv4", 00:24:11.246 "trsvcid": "4420", 00:24:11.246 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:11.247 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:11.247 "hdgst": false, 00:24:11.247 "ddgst": false 00:24:11.247 }, 00:24:11.247 "method": "bdev_nvme_attach_controller" 00:24:11.247 },{ 00:24:11.247 "params": { 00:24:11.247 "name": "Nvme5", 00:24:11.247 "trtype": "rdma", 00:24:11.247 "traddr": "192.168.100.8", 00:24:11.247 "adrfam": "ipv4", 00:24:11.247 "trsvcid": "4420", 00:24:11.247 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:24:11.247 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:11.247 "hdgst": false, 00:24:11.247 "ddgst": false 00:24:11.247 }, 00:24:11.247 "method": "bdev_nvme_attach_controller" 00:24:11.247 },{ 00:24:11.247 "params": { 00:24:11.247 "name": "Nvme6", 00:24:11.247 "trtype": "rdma", 00:24:11.247 "traddr": "192.168.100.8", 00:24:11.247 "adrfam": "ipv4", 00:24:11.247 "trsvcid": "4420", 00:24:11.247 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:11.247 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:11.247 "hdgst": false, 00:24:11.247 "ddgst": false 00:24:11.247 }, 00:24:11.247 "method": "bdev_nvme_attach_controller" 00:24:11.247 },{ 00:24:11.247 "params": { 00:24:11.247 "name": "Nvme7", 00:24:11.247 "trtype": "rdma", 00:24:11.247 "traddr": "192.168.100.8", 00:24:11.247 "adrfam": "ipv4", 00:24:11.247 "trsvcid": "4420", 00:24:11.247 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:11.247 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:11.247 "hdgst": false, 00:24:11.247 "ddgst": false 00:24:11.247 }, 00:24:11.247 "method": "bdev_nvme_attach_controller" 00:24:11.247 },{ 00:24:11.247 "params": { 00:24:11.247 "name": "Nvme8", 00:24:11.247 "trtype": "rdma", 00:24:11.247 "traddr": "192.168.100.8", 00:24:11.247 "adrfam": "ipv4", 00:24:11.247 "trsvcid": "4420", 00:24:11.247 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:11.247 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:11.247 "hdgst": false, 00:24:11.247 "ddgst": false 00:24:11.247 }, 00:24:11.247 "method": "bdev_nvme_attach_controller" 00:24:11.247 },{ 00:24:11.247 "params": { 00:24:11.247 "name": "Nvme9", 00:24:11.247 "trtype": "rdma", 00:24:11.247 "traddr": "192.168.100.8", 00:24:11.247 "adrfam": "ipv4", 00:24:11.247 "trsvcid": "4420", 00:24:11.247 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:11.247 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:11.247 "hdgst": false, 00:24:11.247 "ddgst": false 00:24:11.247 }, 00:24:11.247 "method": "bdev_nvme_attach_controller" 00:24:11.247 },{ 00:24:11.247 "params": { 00:24:11.247 "name": "Nvme10", 00:24:11.247 "trtype": "rdma", 00:24:11.247 "traddr": "192.168.100.8", 00:24:11.247 "adrfam": "ipv4", 00:24:11.247 "trsvcid": "4420", 00:24:11.247 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:11.247 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:11.247 "hdgst": false, 00:24:11.247 "ddgst": false 00:24:11.247 }, 00:24:11.247 "method": "bdev_nvme_attach_controller" 00:24:11.247 }' 00:24:11.247 [2024-11-20 12:51:44.119488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.247 [2024-11-20 12:51:44.182319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.632 12:51:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.632 12:51:45 -- common/autotest_common.sh@862 -- # return 0 00:24:12.632 12:51:45 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:12.632 12:51:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.632 12:51:45 -- common/autotest_common.sh@10 -- # set +x 00:24:12.632 12:51:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.632 12:51:45 -- target/shutdown.sh@83 -- # kill -9 610180 00:24:12.632 12:51:45 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:12.632 12:51:45 -- target/shutdown.sh@87 -- # sleep 1 00:24:13.575 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 610180 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:13.575 
12:51:46 -- target/shutdown.sh@88 -- # kill -0 609863 00:24:13.575 12:51:46 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:13.575 12:51:46 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:13.575 12:51:46 -- nvmf/common.sh@520 -- # config=() 00:24:13.575 12:51:46 -- nvmf/common.sh@520 -- # local subsystem config 00:24:13.575 12:51:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.575 12:51:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.575 { 00:24:13.575 "params": { 00:24:13.575 "name": "Nvme$subsystem", 00:24:13.575 "trtype": "$TEST_TRANSPORT", 00:24:13.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.575 "adrfam": "ipv4", 00:24:13.575 "trsvcid": "$NVMF_PORT", 00:24:13.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.575 "hdgst": ${hdgst:-false}, 00:24:13.575 "ddgst": ${ddgst:-false} 00:24:13.575 }, 00:24:13.575 "method": "bdev_nvme_attach_controller" 00:24:13.575 } 00:24:13.575 EOF 00:24:13.575 )") 00:24:13.575 12:51:46 -- nvmf/common.sh@542 -- # cat 00:24:13.575 12:51:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.575 12:51:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.575 { 00:24:13.575 "params": { 00:24:13.575 "name": "Nvme$subsystem", 00:24:13.575 "trtype": "$TEST_TRANSPORT", 00:24:13.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.575 "adrfam": "ipv4", 00:24:13.575 "trsvcid": "$NVMF_PORT", 00:24:13.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.575 "hdgst": ${hdgst:-false}, 00:24:13.575 "ddgst": ${ddgst:-false} 00:24:13.575 }, 00:24:13.575 "method": "bdev_nvme_attach_controller" 00:24:13.575 } 00:24:13.575 EOF 00:24:13.575 )") 00:24:13.575 12:51:46 -- nvmf/common.sh@542 -- # cat 00:24:13.575 12:51:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.575 12:51:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.575 { 00:24:13.575 "params": { 00:24:13.575 "name": "Nvme$subsystem", 00:24:13.575 "trtype": "$TEST_TRANSPORT", 00:24:13.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.575 "adrfam": "ipv4", 00:24:13.575 "trsvcid": "$NVMF_PORT", 00:24:13.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.575 "hdgst": ${hdgst:-false}, 00:24:13.575 "ddgst": ${ddgst:-false} 00:24:13.575 }, 00:24:13.575 "method": "bdev_nvme_attach_controller" 00:24:13.575 } 00:24:13.575 EOF 00:24:13.575 )") 00:24:13.575 12:51:46 -- nvmf/common.sh@542 -- # cat 00:24:13.575 12:51:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.575 12:51:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.575 { 00:24:13.575 "params": { 00:24:13.575 "name": "Nvme$subsystem", 00:24:13.575 "trtype": "$TEST_TRANSPORT", 00:24:13.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.575 "adrfam": "ipv4", 00:24:13.575 "trsvcid": "$NVMF_PORT", 00:24:13.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.575 "hdgst": ${hdgst:-false}, 00:24:13.575 "ddgst": ${ddgst:-false} 00:24:13.575 }, 00:24:13.575 "method": "bdev_nvme_attach_controller" 00:24:13.575 } 00:24:13.575 EOF 00:24:13.575 )") 00:24:13.575 12:51:46 -- nvmf/common.sh@542 -- # cat 00:24:13.575 12:51:46 -- nvmf/common.sh@522 -- 
# for subsystem in "${@:-1}" 00:24:13.575 12:51:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.575 { 00:24:13.575 "params": { 00:24:13.575 "name": "Nvme$subsystem", 00:24:13.575 "trtype": "$TEST_TRANSPORT", 00:24:13.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.575 "adrfam": "ipv4", 00:24:13.575 "trsvcid": "$NVMF_PORT", 00:24:13.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.575 "hdgst": ${hdgst:-false}, 00:24:13.575 "ddgst": ${ddgst:-false} 00:24:13.575 }, 00:24:13.575 "method": "bdev_nvme_attach_controller" 00:24:13.575 } 00:24:13.575 EOF 00:24:13.575 )") 00:24:13.575 12:51:46 -- nvmf/common.sh@542 -- # cat 00:24:13.575 12:51:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.575 12:51:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.575 { 00:24:13.575 "params": { 00:24:13.575 "name": "Nvme$subsystem", 00:24:13.575 "trtype": "$TEST_TRANSPORT", 00:24:13.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.575 "adrfam": "ipv4", 00:24:13.575 "trsvcid": "$NVMF_PORT", 00:24:13.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.575 "hdgst": ${hdgst:-false}, 00:24:13.575 "ddgst": ${ddgst:-false} 00:24:13.575 }, 00:24:13.576 "method": "bdev_nvme_attach_controller" 00:24:13.576 } 00:24:13.576 EOF 00:24:13.576 )") 00:24:13.576 12:51:46 -- nvmf/common.sh@542 -- # cat 00:24:13.576 12:51:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.576 12:51:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.576 { 00:24:13.576 "params": { 00:24:13.576 "name": "Nvme$subsystem", 00:24:13.576 "trtype": "$TEST_TRANSPORT", 00:24:13.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.576 "adrfam": "ipv4", 00:24:13.576 "trsvcid": "$NVMF_PORT", 00:24:13.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.576 "hdgst": ${hdgst:-false}, 00:24:13.576 "ddgst": ${ddgst:-false} 00:24:13.576 }, 00:24:13.576 "method": "bdev_nvme_attach_controller" 00:24:13.576 } 00:24:13.576 EOF 00:24:13.576 )") 00:24:13.576 12:51:46 -- nvmf/common.sh@542 -- # cat 00:24:13.576 12:51:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.576 12:51:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.576 { 00:24:13.576 "params": { 00:24:13.576 "name": "Nvme$subsystem", 00:24:13.576 "trtype": "$TEST_TRANSPORT", 00:24:13.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.576 "adrfam": "ipv4", 00:24:13.576 "trsvcid": "$NVMF_PORT", 00:24:13.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.576 "hdgst": ${hdgst:-false}, 00:24:13.576 "ddgst": ${ddgst:-false} 00:24:13.576 }, 00:24:13.576 "method": "bdev_nvme_attach_controller" 00:24:13.576 } 00:24:13.576 EOF 00:24:13.576 )") 00:24:13.576 [2024-11-20 12:51:46.577354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
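What the xtrace above captures is the hand-off inside nvmf_shutdown_tc1: the throw-away bdev_svc instance (pid 610180) is removed with kill -9, the script then checks with kill -0 that the nvmf target itself (pid 609863) survived, and only after that does it launch bdevperf for the 1 second verify pass against the same generated JSON. A minimal sketch of that swap-and-liveness gate, with the pid variables standing in for the autotest bookkeeping rather than copied from shutdown.sh:

  # sketch: drop the temporary bdev_svc process, then make sure the target is still up
  kill -9 "$svc_pid" 2>/dev/null        # bash reports this as "Killed" when it reaps the job
  rm -f /var/run/spdk_bdev1
  sleep 1
  if kill -0 "$target_pid" 2>/dev/null; then
      echo "nvmf target still alive, starting bdevperf"
  else
      echo "nvmf target died during the swap" >&2
      exit 1
  fi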
00:24:13.576 [2024-11-20 12:51:46.577417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid610686 ] 00:24:13.576 12:51:46 -- nvmf/common.sh@542 -- # cat 00:24:13.576 12:51:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.576 12:51:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.576 { 00:24:13.576 "params": { 00:24:13.576 "name": "Nvme$subsystem", 00:24:13.576 "trtype": "$TEST_TRANSPORT", 00:24:13.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.576 "adrfam": "ipv4", 00:24:13.576 "trsvcid": "$NVMF_PORT", 00:24:13.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.576 "hdgst": ${hdgst:-false}, 00:24:13.576 "ddgst": ${ddgst:-false} 00:24:13.576 }, 00:24:13.576 "method": "bdev_nvme_attach_controller" 00:24:13.576 } 00:24:13.576 EOF 00:24:13.576 )") 00:24:13.576 12:51:46 -- nvmf/common.sh@542 -- # cat 00:24:13.576 12:51:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.576 12:51:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.576 { 00:24:13.576 "params": { 00:24:13.576 "name": "Nvme$subsystem", 00:24:13.576 "trtype": "$TEST_TRANSPORT", 00:24:13.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.576 "adrfam": "ipv4", 00:24:13.576 "trsvcid": "$NVMF_PORT", 00:24:13.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.576 "hdgst": ${hdgst:-false}, 00:24:13.576 "ddgst": ${ddgst:-false} 00:24:13.576 }, 00:24:13.576 "method": "bdev_nvme_attach_controller" 00:24:13.576 } 00:24:13.576 EOF 00:24:13.576 )") 00:24:13.576 12:51:46 -- nvmf/common.sh@542 -- # cat 00:24:13.576 12:51:46 -- nvmf/common.sh@544 -- # jq . 
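Each config+=("$(cat <<-EOF ... EOF)") step in the trace above appends one bdev_nvme_attach_controller fragment, with $subsystem, $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT expanded by the here-document; the trailing cat, jq ., IFS=, and printf '%s\n' calls then join the ten fragments with commas and validate the result before it is handed to the consumer as a JSON config. A stripped-down sketch of the same pattern follows; the fragments are built inline instead of with here-documents, and the surrounding "subsystems"/"bdev" envelope is an assumption rather than a copy of nvmf/common.sh:

  # sketch: one attach_controller fragment per subsystem, comma-joined and checked with jq
  # (assumes TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are already exported by the test env)
  config=()
  for subsystem in 1 2 3; do   # shutdown.sh passes 1..10
      config+=("{\"params\":{\"name\":\"Nvme$subsystem\",\"trtype\":\"$TEST_TRANSPORT\",\"traddr\":\"$NVMF_FIRST_TARGET_IP\",\"adrfam\":\"ipv4\",\"trsvcid\":\"$NVMF_PORT\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$subsystem\",\"hostnqn\":\"nqn.2016-06.io.spdk:host$subsystem\",\"hdgst\":false,\"ddgst\":false},\"method\":\"bdev_nvme_attach_controller\"}")
  done
  (
      IFS=,   # "${config[*]}" joins the fragments with the first character of IFS
      printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}"
  ) | jq .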
00:24:13.576 12:51:46 -- nvmf/common.sh@545 -- # IFS=, 00:24:13.576 12:51:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:13.576 "params": { 00:24:13.576 "name": "Nvme1", 00:24:13.576 "trtype": "rdma", 00:24:13.576 "traddr": "192.168.100.8", 00:24:13.576 "adrfam": "ipv4", 00:24:13.576 "trsvcid": "4420", 00:24:13.576 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:13.576 "hdgst": false, 00:24:13.576 "ddgst": false 00:24:13.576 }, 00:24:13.576 "method": "bdev_nvme_attach_controller" 00:24:13.576 },{ 00:24:13.576 "params": { 00:24:13.576 "name": "Nvme2", 00:24:13.576 "trtype": "rdma", 00:24:13.576 "traddr": "192.168.100.8", 00:24:13.576 "adrfam": "ipv4", 00:24:13.576 "trsvcid": "4420", 00:24:13.576 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:13.576 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:13.576 "hdgst": false, 00:24:13.576 "ddgst": false 00:24:13.576 }, 00:24:13.576 "method": "bdev_nvme_attach_controller" 00:24:13.576 },{ 00:24:13.576 "params": { 00:24:13.576 "name": "Nvme3", 00:24:13.576 "trtype": "rdma", 00:24:13.576 "traddr": "192.168.100.8", 00:24:13.576 "adrfam": "ipv4", 00:24:13.576 "trsvcid": "4420", 00:24:13.576 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:13.576 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:13.576 "hdgst": false, 00:24:13.576 "ddgst": false 00:24:13.576 }, 00:24:13.576 "method": "bdev_nvme_attach_controller" 00:24:13.576 },{ 00:24:13.576 "params": { 00:24:13.576 "name": "Nvme4", 00:24:13.576 "trtype": "rdma", 00:24:13.576 "traddr": "192.168.100.8", 00:24:13.576 "adrfam": "ipv4", 00:24:13.576 "trsvcid": "4420", 00:24:13.576 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:13.576 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:13.576 "hdgst": false, 00:24:13.576 "ddgst": false 00:24:13.576 }, 00:24:13.576 "method": "bdev_nvme_attach_controller" 00:24:13.576 },{ 00:24:13.576 "params": { 00:24:13.576 "name": "Nvme5", 00:24:13.576 "trtype": "rdma", 00:24:13.576 "traddr": "192.168.100.8", 00:24:13.576 "adrfam": "ipv4", 00:24:13.576 "trsvcid": "4420", 00:24:13.576 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:13.576 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:13.576 "hdgst": false, 00:24:13.576 "ddgst": false 00:24:13.576 }, 00:24:13.576 "method": "bdev_nvme_attach_controller" 00:24:13.576 },{ 00:24:13.576 "params": { 00:24:13.576 "name": "Nvme6", 00:24:13.576 "trtype": "rdma", 00:24:13.576 "traddr": "192.168.100.8", 00:24:13.576 "adrfam": "ipv4", 00:24:13.576 "trsvcid": "4420", 00:24:13.576 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:13.576 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:13.576 "hdgst": false, 00:24:13.576 "ddgst": false 00:24:13.576 }, 00:24:13.576 "method": "bdev_nvme_attach_controller" 00:24:13.576 },{ 00:24:13.576 "params": { 00:24:13.576 "name": "Nvme7", 00:24:13.576 "trtype": "rdma", 00:24:13.576 "traddr": "192.168.100.8", 00:24:13.576 "adrfam": "ipv4", 00:24:13.576 "trsvcid": "4420", 00:24:13.576 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:13.576 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:13.576 "hdgst": false, 00:24:13.576 "ddgst": false 00:24:13.576 }, 00:24:13.576 "method": "bdev_nvme_attach_controller" 00:24:13.576 },{ 00:24:13.576 "params": { 00:24:13.576 "name": "Nvme8", 00:24:13.576 "trtype": "rdma", 00:24:13.576 "traddr": "192.168.100.8", 00:24:13.576 "adrfam": "ipv4", 00:24:13.576 "trsvcid": "4420", 00:24:13.576 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:13.576 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:13.576 "hdgst": false, 00:24:13.576 "ddgst": false 00:24:13.576 }, 
00:24:13.576 "method": "bdev_nvme_attach_controller" 00:24:13.576 },{ 00:24:13.576 "params": { 00:24:13.576 "name": "Nvme9", 00:24:13.576 "trtype": "rdma", 00:24:13.576 "traddr": "192.168.100.8", 00:24:13.576 "adrfam": "ipv4", 00:24:13.576 "trsvcid": "4420", 00:24:13.576 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:13.576 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:13.576 "hdgst": false, 00:24:13.576 "ddgst": false 00:24:13.576 }, 00:24:13.576 "method": "bdev_nvme_attach_controller" 00:24:13.576 },{ 00:24:13.576 "params": { 00:24:13.576 "name": "Nvme10", 00:24:13.576 "trtype": "rdma", 00:24:13.576 "traddr": "192.168.100.8", 00:24:13.576 "adrfam": "ipv4", 00:24:13.576 "trsvcid": "4420", 00:24:13.576 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:13.576 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:13.576 "hdgst": false, 00:24:13.576 "ddgst": false 00:24:13.576 }, 00:24:13.576 "method": "bdev_nvme_attach_controller" 00:24:13.576 }' 00:24:13.576 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.576 [2024-11-20 12:51:46.639253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.837 [2024-11-20 12:51:46.701419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.781 Running I/O for 1 seconds... 00:24:15.725 00:24:15.725 Latency(us) 00:24:15.725 [2024-11-20T11:51:48.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.726 [2024-11-20T11:51:48.834Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.726 Verification LBA range: start 0x0 length 0x400 00:24:15.726 Nvme1n1 : 1.13 571.74 35.73 0.00 0.00 110343.21 9011.20 149422.08 00:24:15.726 [2024-11-20T11:51:48.834Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.726 Verification LBA range: start 0x0 length 0x400 00:24:15.726 Nvme2n1 : 1.13 578.10 36.13 0.00 0.00 108347.94 9393.49 96556.37 00:24:15.726 [2024-11-20T11:51:48.834Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.726 Verification LBA range: start 0x0 length 0x400 00:24:15.726 Nvme3n1 : 1.14 577.41 36.09 0.00 0.00 107697.08 9830.40 92624.21 00:24:15.726 [2024-11-20T11:51:48.834Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.726 Verification LBA range: start 0x0 length 0x400 00:24:15.726 Nvme4n1 : 1.14 576.72 36.04 0.00 0.00 107045.38 10212.69 90876.59 00:24:15.726 [2024-11-20T11:51:48.834Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.726 Verification LBA range: start 0x0 length 0x400 00:24:15.726 Nvme5n1 : 1.14 576.03 36.00 0.00 0.00 106400.26 10594.99 90002.77 00:24:15.726 [2024-11-20T11:51:48.834Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.726 Verification LBA range: start 0x0 length 0x400 00:24:15.726 Nvme6n1 : 1.14 575.35 35.96 0.00 0.00 105754.06 10977.28 91750.40 00:24:15.726 [2024-11-20T11:51:48.834Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.726 Verification LBA range: start 0x0 length 0x400 00:24:15.726 Nvme7n1 : 1.14 574.66 35.92 0.00 0.00 105033.01 11414.19 94371.84 00:24:15.726 [2024-11-20T11:51:48.834Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.726 Verification LBA range: start 0x0 length 0x400 00:24:15.726 Nvme8n1 : 1.14 573.99 35.87 0.00 0.00 104328.37 11796.48 96993.28 00:24:15.726 [2024-11-20T11:51:48.834Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:15.726 Verification LBA range: start 0x0 length 0x400 00:24:15.726 Nvme9n1 : 1.14 573.31 35.83 0.00 0.00 103607.74 12178.77 99614.72 00:24:15.726 [2024-11-20T11:51:48.834Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.726 Verification LBA range: start 0x0 length 0x400 00:24:15.726 Nvme10n1 : 1.15 378.91 23.68 0.00 0.00 155050.15 9393.49 435159.04 00:24:15.726 [2024-11-20T11:51:48.834Z] =================================================================================================================== 00:24:15.726 [2024-11-20T11:51:48.834Z] Total : 5556.22 347.26 0.00 0.00 109829.77 9011.20 435159.04 00:24:15.987 12:51:48 -- target/shutdown.sh@93 -- # stoptarget 00:24:15.987 12:51:48 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:15.987 12:51:48 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:15.988 12:51:49 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:15.988 12:51:49 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:15.988 12:51:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:15.988 12:51:49 -- nvmf/common.sh@116 -- # sync 00:24:15.988 12:51:49 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:15.988 12:51:49 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:15.988 12:51:49 -- nvmf/common.sh@119 -- # set +e 00:24:15.988 12:51:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:15.988 12:51:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:15.988 rmmod nvme_rdma 00:24:15.988 rmmod nvme_fabrics 00:24:15.988 12:51:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:15.988 12:51:49 -- nvmf/common.sh@123 -- # set -e 00:24:15.988 12:51:49 -- nvmf/common.sh@124 -- # return 0 00:24:15.988 12:51:49 -- nvmf/common.sh@477 -- # '[' -n 609863 ']' 00:24:15.988 12:51:49 -- nvmf/common.sh@478 -- # killprocess 609863 00:24:15.988 12:51:49 -- common/autotest_common.sh@936 -- # '[' -z 609863 ']' 00:24:15.988 12:51:49 -- common/autotest_common.sh@940 -- # kill -0 609863 00:24:15.988 12:51:49 -- common/autotest_common.sh@941 -- # uname 00:24:15.988 12:51:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:15.988 12:51:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 609863 00:24:16.265 12:51:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:16.265 12:51:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:16.266 12:51:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 609863' 00:24:16.266 killing process with pid 609863 00:24:16.266 12:51:49 -- common/autotest_common.sh@955 -- # kill 609863 00:24:16.266 12:51:49 -- common/autotest_common.sh@960 -- # wait 609863 00:24:16.531 12:51:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:16.531 12:51:49 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:16.531 00:24:16.531 real 0m14.261s 00:24:16.531 user 0m32.890s 00:24:16.531 sys 0m6.288s 00:24:16.531 12:51:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:16.531 12:51:49 -- common/autotest_common.sh@10 -- # set +x 00:24:16.531 ************************************ 00:24:16.531 END TEST nvmf_shutdown_tc1 00:24:16.531 ************************************ 00:24:16.531 12:51:49 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:16.531 12:51:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:16.531 12:51:49 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:24:16.531 12:51:49 -- common/autotest_common.sh@10 -- # set +x 00:24:16.531 ************************************ 00:24:16.531 START TEST nvmf_shutdown_tc2 00:24:16.531 ************************************ 00:24:16.531 12:51:49 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc2 00:24:16.531 12:51:49 -- target/shutdown.sh@98 -- # starttarget 00:24:16.531 12:51:49 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:16.531 12:51:49 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:16.531 12:51:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.531 12:51:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:16.531 12:51:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:16.531 12:51:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:16.531 12:51:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.531 12:51:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:16.531 12:51:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.531 12:51:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:16.531 12:51:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:16.531 12:51:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:16.531 12:51:49 -- common/autotest_common.sh@10 -- # set +x 00:24:16.531 12:51:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:16.531 12:51:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:16.531 12:51:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:16.531 12:51:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:16.531 12:51:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:16.531 12:51:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:16.531 12:51:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:16.531 12:51:49 -- nvmf/common.sh@294 -- # net_devs=() 00:24:16.531 12:51:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:16.531 12:51:49 -- nvmf/common.sh@295 -- # e810=() 00:24:16.531 12:51:49 -- nvmf/common.sh@295 -- # local -ga e810 00:24:16.531 12:51:49 -- nvmf/common.sh@296 -- # x722=() 00:24:16.531 12:51:49 -- nvmf/common.sh@296 -- # local -ga x722 00:24:16.531 12:51:49 -- nvmf/common.sh@297 -- # mlx=() 00:24:16.531 12:51:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:16.531 12:51:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.531 12:51:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.531 12:51:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.531 12:51:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.531 12:51:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.531 12:51:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.531 12:51:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.531 12:51:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.531 12:51:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.531 12:51:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.531 12:51:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.531 12:51:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:16.531 12:51:49 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:16.531 12:51:49 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:24:16.531 12:51:49 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:16.531 12:51:49 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:16.531 12:51:49 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:16.531 12:51:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:16.531 12:51:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:16.531 12:51:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:24:16.531 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:24:16.531 12:51:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:16.531 12:51:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:16.531 12:51:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:16.531 12:51:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:16.531 12:51:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:16.531 12:51:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:16.531 12:51:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:16.531 12:51:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:24:16.531 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:24:16.531 12:51:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:16.531 12:51:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:16.531 12:51:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:16.531 12:51:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:16.531 12:51:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:16.531 12:51:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:16.531 12:51:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:16.531 12:51:49 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:16.531 12:51:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:16.531 12:51:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.531 12:51:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:16.531 12:51:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.531 12:51:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:24:16.531 Found net devices under 0000:98:00.0: mlx_0_0 00:24:16.531 12:51:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.531 12:51:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:16.531 12:51:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.531 12:51:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:16.531 12:51:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.531 12:51:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:24:16.531 Found net devices under 0000:98:00.1: mlx_0_1 00:24:16.531 12:51:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.531 12:51:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:16.531 12:51:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:16.531 12:51:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:16.532 12:51:49 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:16.532 12:51:49 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:16.532 12:51:49 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:16.532 12:51:49 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:16.532 12:51:49 -- nvmf/common.sh@57 -- # uname 00:24:16.532 12:51:49 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:16.532 12:51:49 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:24:16.532 12:51:49 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:16.532 12:51:49 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:16.532 12:51:49 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:16.532 12:51:49 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:16.532 12:51:49 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:16.532 12:51:49 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:16.532 12:51:49 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:16.532 12:51:49 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:16.532 12:51:49 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:16.532 12:51:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:16.532 12:51:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:16.532 12:51:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:16.532 12:51:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:16.532 12:51:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:16.532 12:51:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:16.532 12:51:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.532 12:51:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:16.532 12:51:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:16.532 12:51:49 -- nvmf/common.sh@104 -- # continue 2 00:24:16.532 12:51:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:16.532 12:51:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.532 12:51:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:16.532 12:51:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.532 12:51:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:16.532 12:51:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:16.532 12:51:49 -- nvmf/common.sh@104 -- # continue 2 00:24:16.532 12:51:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:16.532 12:51:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:16.532 12:51:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:16.532 12:51:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:16.532 12:51:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:16.532 12:51:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:16.532 12:51:49 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:16.532 12:51:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:16.532 12:51:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:16.532 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:16.532 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:24:16.532 altname enp152s0f0np0 00:24:16.532 altname ens817f0np0 00:24:16.532 inet 192.168.100.8/24 scope global mlx_0_0 00:24:16.532 valid_lft forever preferred_lft forever 00:24:16.532 12:51:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:16.532 12:51:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:16.532 12:51:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:16.532 12:51:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:16.532 12:51:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:16.532 12:51:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:16.532 12:51:49 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:16.532 12:51:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:16.532 12:51:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:16.532 5: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:16.532 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:24:16.532 altname enp152s0f1np1 00:24:16.532 altname ens817f1np1 00:24:16.532 inet 192.168.100.9/24 scope global mlx_0_1 00:24:16.532 valid_lft forever preferred_lft forever 00:24:16.532 12:51:49 -- nvmf/common.sh@410 -- # return 0 00:24:16.532 12:51:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:16.532 12:51:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:16.532 12:51:49 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:16.532 12:51:49 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:16.532 12:51:49 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:16.532 12:51:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:16.532 12:51:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:16.532 12:51:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:16.532 12:51:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:16.793 12:51:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:16.793 12:51:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:16.793 12:51:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.793 12:51:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:16.793 12:51:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:16.793 12:51:49 -- nvmf/common.sh@104 -- # continue 2 00:24:16.793 12:51:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:16.793 12:51:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.793 12:51:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:16.793 12:51:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.793 12:51:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:16.793 12:51:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:16.793 12:51:49 -- nvmf/common.sh@104 -- # continue 2 00:24:16.793 12:51:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:16.793 12:51:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:16.793 12:51:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:16.793 12:51:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:16.793 12:51:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:16.793 12:51:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:16.793 12:51:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:16.793 12:51:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:16.793 12:51:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:16.793 12:51:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:16.793 12:51:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:16.793 12:51:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:16.793 12:51:49 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:16.793 192.168.100.9' 00:24:16.793 12:51:49 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:16.793 192.168.100.9' 00:24:16.793 12:51:49 -- nvmf/common.sh@445 -- # head -n 1 00:24:16.793 12:51:49 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:16.793 12:51:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:16.793 192.168.100.9' 00:24:16.793 12:51:49 -- nvmf/common.sh@446 -- # tail -n +2 00:24:16.793 12:51:49 -- nvmf/common.sh@446 -- # head -n 1 00:24:16.793 12:51:49 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:16.793 12:51:49 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:16.793 12:51:49 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:16.793 12:51:49 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:16.793 12:51:49 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:16.793 12:51:49 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:16.793 12:51:49 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:16.793 12:51:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:16.793 12:51:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:16.793 12:51:49 -- common/autotest_common.sh@10 -- # set +x 00:24:16.793 12:51:49 -- nvmf/common.sh@469 -- # nvmfpid=611461 00:24:16.793 12:51:49 -- nvmf/common.sh@470 -- # waitforlisten 611461 00:24:16.793 12:51:49 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:16.793 12:51:49 -- common/autotest_common.sh@829 -- # '[' -z 611461 ']' 00:24:16.793 12:51:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.793 12:51:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:16.793 12:51:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.793 12:51:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.793 12:51:49 -- common/autotest_common.sh@10 -- # set +x 00:24:16.793 [2024-11-20 12:51:49.778718] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:16.793 [2024-11-20 12:51:49.778769] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.793 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.793 [2024-11-20 12:51:49.857841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:17.054 [2024-11-20 12:51:49.913434] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:17.054 [2024-11-20 12:51:49.913535] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.054 [2024-11-20 12:51:49.913541] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.054 [2024-11-20 12:51:49.913546] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
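The block above is nvmfappstart for tc2: nvmf_tgt is launched with -i 0 -e 0xFFFF -m 0x1E (tracepoints enabled, core mask for cores 1-4), its pid is recorded as nvmfpid=611461, and waitforlisten blocks until the target answers RPCs on /var/tmp/spdk.sock; the *ERROR* about the over-long tracepoint name and the spdk_trace hints are emitted during that startup and do not fail the test. A rough stand-in for the start-and-wait step; the polling loop is an illustration, not the waitforlisten helper itself:

  # sketch: start the target and wait until its RPC socket answers (paths relative to an SPDK build tree)
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.5
  done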
00:24:17.054 [2024-11-20 12:51:49.913669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.054 [2024-11-20 12:51:49.913827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.054 [2024-11-20 12:51:49.913986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.054 [2024-11-20 12:51:49.913997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:17.627 12:51:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:17.627 12:51:50 -- common/autotest_common.sh@862 -- # return 0 00:24:17.627 12:51:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:17.627 12:51:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:17.627 12:51:50 -- common/autotest_common.sh@10 -- # set +x 00:24:17.627 12:51:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.627 12:51:50 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:17.627 12:51:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.627 12:51:50 -- common/autotest_common.sh@10 -- # set +x 00:24:17.627 [2024-11-20 12:51:50.638466] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b71ac0/0x1b75fb0) succeed. 00:24:17.627 [2024-11-20 12:51:50.648314] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b730b0/0x1bb7650) succeed. 00:24:17.889 12:51:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.889 12:51:50 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:17.889 12:51:50 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:17.889 12:51:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:17.889 12:51:50 -- common/autotest_common.sh@10 -- # set +x 00:24:17.889 12:51:50 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:17.890 12:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:17.890 12:51:50 -- target/shutdown.sh@28 -- # cat 00:24:17.890 12:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:17.890 12:51:50 -- target/shutdown.sh@28 -- # cat 00:24:17.890 12:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:17.890 12:51:50 -- target/shutdown.sh@28 -- # cat 00:24:17.890 12:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:17.890 12:51:50 -- target/shutdown.sh@28 -- # cat 00:24:17.890 12:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:17.890 12:51:50 -- target/shutdown.sh@28 -- # cat 00:24:17.890 12:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:17.890 12:51:50 -- target/shutdown.sh@28 -- # cat 00:24:17.890 12:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:17.890 12:51:50 -- target/shutdown.sh@28 -- # cat 00:24:17.890 12:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:17.890 12:51:50 -- target/shutdown.sh@28 -- # cat 00:24:17.890 12:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:17.890 12:51:50 -- target/shutdown.sh@28 -- # cat 00:24:17.890 12:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:17.890 12:51:50 -- target/shutdown.sh@28 -- # cat 00:24:17.890 12:51:50 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:17.890 12:51:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.890 12:51:50 -- common/autotest_common.sh@10 -- # set +x 
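With the RDMA transport created (nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192, one IB device per mlx5 port), the create_subsystems loop batches the per-subsystem RPCs into rpcs.txt; the Malloc1..Malloc10 lines and the single "Target Listening on 192.168.100.8 port 4420" notice below are its visible output. Spelled out as direct rpc.py calls it amounts to roughly the following; the malloc size, block size and serial numbers are illustrative, not read from the script:

  # sketch of the per-subsystem setup behind the Malloc1..Malloc10 output (i = 1..10)
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  for i in $(seq 1 10); do
      ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 64 512          # 64 MiB bdev, 512 B blocks (assumed sizes)
      ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t rdma -f ipv4 -a 192.168.100.8 -s 4420
  done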
00:24:17.890 Malloc1 00:24:17.890 [2024-11-20 12:51:50.840733] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:17.890 Malloc2 00:24:17.890 Malloc3 00:24:17.890 Malloc4 00:24:17.890 Malloc5 00:24:18.151 Malloc6 00:24:18.151 Malloc7 00:24:18.151 Malloc8 00:24:18.151 Malloc9 00:24:18.151 Malloc10 00:24:18.151 12:51:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.151 12:51:51 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:18.151 12:51:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:18.151 12:51:51 -- common/autotest_common.sh@10 -- # set +x 00:24:18.151 12:51:51 -- target/shutdown.sh@102 -- # perfpid=611808 00:24:18.151 12:51:51 -- target/shutdown.sh@103 -- # waitforlisten 611808 /var/tmp/bdevperf.sock 00:24:18.151 12:51:51 -- common/autotest_common.sh@829 -- # '[' -z 611808 ']' 00:24:18.151 12:51:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:18.151 12:51:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:18.151 12:51:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:18.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:18.151 12:51:51 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:18.151 12:51:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:18.151 12:51:51 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:18.151 12:51:51 -- common/autotest_common.sh@10 -- # set +x 00:24:18.151 12:51:51 -- nvmf/common.sh@520 -- # config=() 00:24:18.151 12:51:51 -- nvmf/common.sh@520 -- # local subsystem config 00:24:18.151 12:51:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.151 12:51:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.151 { 00:24:18.151 "params": { 00:24:18.151 "name": "Nvme$subsystem", 00:24:18.151 "trtype": "$TEST_TRANSPORT", 00:24:18.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.151 "adrfam": "ipv4", 00:24:18.151 "trsvcid": "$NVMF_PORT", 00:24:18.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.151 "hdgst": ${hdgst:-false}, 00:24:18.151 "ddgst": ${ddgst:-false} 00:24:18.151 }, 00:24:18.151 "method": "bdev_nvme_attach_controller" 00:24:18.151 } 00:24:18.151 EOF 00:24:18.151 )") 00:24:18.151 12:51:51 -- nvmf/common.sh@542 -- # cat 00:24:18.151 12:51:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.151 12:51:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.151 { 00:24:18.151 "params": { 00:24:18.151 "name": "Nvme$subsystem", 00:24:18.151 "trtype": "$TEST_TRANSPORT", 00:24:18.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.151 "adrfam": "ipv4", 00:24:18.151 "trsvcid": "$NVMF_PORT", 00:24:18.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.151 "hdgst": ${hdgst:-false}, 00:24:18.151 "ddgst": ${ddgst:-false} 00:24:18.151 }, 00:24:18.151 "method": "bdev_nvme_attach_controller" 00:24:18.151 } 00:24:18.151 EOF 00:24:18.151 )") 00:24:18.151 12:51:51 -- nvmf/common.sh@542 -- # cat 00:24:18.413 12:51:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.413 12:51:51 -- nvmf/common.sh@542 -- # 
config+=("$(cat <<-EOF 00:24:18.413 { 00:24:18.413 "params": { 00:24:18.413 "name": "Nvme$subsystem", 00:24:18.413 "trtype": "$TEST_TRANSPORT", 00:24:18.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.413 "adrfam": "ipv4", 00:24:18.413 "trsvcid": "$NVMF_PORT", 00:24:18.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.413 "hdgst": ${hdgst:-false}, 00:24:18.413 "ddgst": ${ddgst:-false} 00:24:18.413 }, 00:24:18.413 "method": "bdev_nvme_attach_controller" 00:24:18.413 } 00:24:18.413 EOF 00:24:18.413 )") 00:24:18.413 12:51:51 -- nvmf/common.sh@542 -- # cat 00:24:18.413 12:51:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.413 12:51:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.413 { 00:24:18.413 "params": { 00:24:18.413 "name": "Nvme$subsystem", 00:24:18.413 "trtype": "$TEST_TRANSPORT", 00:24:18.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.413 "adrfam": "ipv4", 00:24:18.413 "trsvcid": "$NVMF_PORT", 00:24:18.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.413 "hdgst": ${hdgst:-false}, 00:24:18.413 "ddgst": ${ddgst:-false} 00:24:18.413 }, 00:24:18.413 "method": "bdev_nvme_attach_controller" 00:24:18.413 } 00:24:18.413 EOF 00:24:18.413 )") 00:24:18.413 12:51:51 -- nvmf/common.sh@542 -- # cat 00:24:18.413 12:51:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.413 12:51:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.413 { 00:24:18.413 "params": { 00:24:18.413 "name": "Nvme$subsystem", 00:24:18.413 "trtype": "$TEST_TRANSPORT", 00:24:18.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.413 "adrfam": "ipv4", 00:24:18.413 "trsvcid": "$NVMF_PORT", 00:24:18.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.413 "hdgst": ${hdgst:-false}, 00:24:18.413 "ddgst": ${ddgst:-false} 00:24:18.413 }, 00:24:18.413 "method": "bdev_nvme_attach_controller" 00:24:18.413 } 00:24:18.413 EOF 00:24:18.413 )") 00:24:18.413 12:51:51 -- nvmf/common.sh@542 -- # cat 00:24:18.413 12:51:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.413 12:51:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.413 { 00:24:18.413 "params": { 00:24:18.413 "name": "Nvme$subsystem", 00:24:18.413 "trtype": "$TEST_TRANSPORT", 00:24:18.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.413 "adrfam": "ipv4", 00:24:18.413 "trsvcid": "$NVMF_PORT", 00:24:18.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.413 "hdgst": ${hdgst:-false}, 00:24:18.413 "ddgst": ${ddgst:-false} 00:24:18.413 }, 00:24:18.413 "method": "bdev_nvme_attach_controller" 00:24:18.413 } 00:24:18.413 EOF 00:24:18.413 )") 00:24:18.413 12:51:51 -- nvmf/common.sh@542 -- # cat 00:24:18.413 [2024-11-20 12:51:51.284974] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
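The tc2 bdevperf instance (perfpid 611808 in this run) is driven the same way as the tc1 one: the JSON produced by gen_nvmf_target_json reaches it through a process substitution, which bdevperf sees as --json /dev/fd/63, the -q 64 -o 65536 -w verify -t 10 options ask for a queue-depth-64, 64 KiB verify workload over ten seconds, and -r gives it a private RPC socket so the script can poll it mid-run. Reconstructed as a standalone command line, with the path shortened to the SPDK build tree:

  # sketch: launch bdevperf against the ten generated NVMe-oF controllers for a 10 s verify run
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
      -q 64 -o 65536 -w verify -t 10 &
  perfpid=$!
  # block until bdevperf has applied the JSON config and attached the Nvme*n1 bdevs
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock framework_wait_init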
00:24:18.413 [2024-11-20 12:51:51.285030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611808 ] 00:24:18.413 12:51:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.413 12:51:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.413 { 00:24:18.413 "params": { 00:24:18.413 "name": "Nvme$subsystem", 00:24:18.413 "trtype": "$TEST_TRANSPORT", 00:24:18.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.413 "adrfam": "ipv4", 00:24:18.413 "trsvcid": "$NVMF_PORT", 00:24:18.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.413 "hdgst": ${hdgst:-false}, 00:24:18.413 "ddgst": ${ddgst:-false} 00:24:18.413 }, 00:24:18.413 "method": "bdev_nvme_attach_controller" 00:24:18.413 } 00:24:18.413 EOF 00:24:18.413 )") 00:24:18.413 12:51:51 -- nvmf/common.sh@542 -- # cat 00:24:18.413 12:51:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.413 12:51:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.413 { 00:24:18.413 "params": { 00:24:18.413 "name": "Nvme$subsystem", 00:24:18.413 "trtype": "$TEST_TRANSPORT", 00:24:18.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.413 "adrfam": "ipv4", 00:24:18.413 "trsvcid": "$NVMF_PORT", 00:24:18.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.413 "hdgst": ${hdgst:-false}, 00:24:18.413 "ddgst": ${ddgst:-false} 00:24:18.413 }, 00:24:18.413 "method": "bdev_nvme_attach_controller" 00:24:18.413 } 00:24:18.413 EOF 00:24:18.413 )") 00:24:18.413 12:51:51 -- nvmf/common.sh@542 -- # cat 00:24:18.413 12:51:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.413 12:51:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.413 { 00:24:18.413 "params": { 00:24:18.413 "name": "Nvme$subsystem", 00:24:18.413 "trtype": "$TEST_TRANSPORT", 00:24:18.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.413 "adrfam": "ipv4", 00:24:18.413 "trsvcid": "$NVMF_PORT", 00:24:18.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.413 "hdgst": ${hdgst:-false}, 00:24:18.413 "ddgst": ${ddgst:-false} 00:24:18.413 }, 00:24:18.413 "method": "bdev_nvme_attach_controller" 00:24:18.413 } 00:24:18.413 EOF 00:24:18.413 )") 00:24:18.413 12:51:51 -- nvmf/common.sh@542 -- # cat 00:24:18.413 12:51:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.413 12:51:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.413 { 00:24:18.413 "params": { 00:24:18.413 "name": "Nvme$subsystem", 00:24:18.413 "trtype": "$TEST_TRANSPORT", 00:24:18.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.413 "adrfam": "ipv4", 00:24:18.413 "trsvcid": "$NVMF_PORT", 00:24:18.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.413 "hdgst": ${hdgst:-false}, 00:24:18.413 "ddgst": ${ddgst:-false} 00:24:18.413 }, 00:24:18.413 "method": "bdev_nvme_attach_controller" 00:24:18.413 } 00:24:18.413 EOF 00:24:18.413 )") 00:24:18.413 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.413 12:51:51 -- nvmf/common.sh@542 -- # cat 00:24:18.413 12:51:51 -- nvmf/common.sh@544 -- # jq . 
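The recurring "EAL: No free 2048 kB hugepages reported on node 1" line is informational and usually just means no 2 MiB pages are reserved on NUMA node 1 of this host, while the --file-prefix=spdk_pid611808 EAL argument keeps this bdevperf instance's hugepage files separate from the target's (spdk0) so both SPDK processes can coexist. If the message needs checking on a given box, the per-node reservation is visible in sysfs; a quick look, assuming 2 MiB pages:

  # how many 2 MiB hugepages each NUMA node has reserved (node 1 shows 0 when this notice appears)
  grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  grep Huge /proc/meminfo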
00:24:18.413 12:51:51 -- nvmf/common.sh@545 -- # IFS=, 00:24:18.413 12:51:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:18.413 "params": { 00:24:18.413 "name": "Nvme1", 00:24:18.413 "trtype": "rdma", 00:24:18.413 "traddr": "192.168.100.8", 00:24:18.413 "adrfam": "ipv4", 00:24:18.413 "trsvcid": "4420", 00:24:18.413 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.413 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:18.413 "hdgst": false, 00:24:18.413 "ddgst": false 00:24:18.413 }, 00:24:18.413 "method": "bdev_nvme_attach_controller" 00:24:18.413 },{ 00:24:18.413 "params": { 00:24:18.413 "name": "Nvme2", 00:24:18.413 "trtype": "rdma", 00:24:18.413 "traddr": "192.168.100.8", 00:24:18.413 "adrfam": "ipv4", 00:24:18.413 "trsvcid": "4420", 00:24:18.413 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:18.413 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:18.413 "hdgst": false, 00:24:18.413 "ddgst": false 00:24:18.413 }, 00:24:18.413 "method": "bdev_nvme_attach_controller" 00:24:18.413 },{ 00:24:18.413 "params": { 00:24:18.413 "name": "Nvme3", 00:24:18.413 "trtype": "rdma", 00:24:18.413 "traddr": "192.168.100.8", 00:24:18.413 "adrfam": "ipv4", 00:24:18.413 "trsvcid": "4420", 00:24:18.413 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:18.413 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:18.413 "hdgst": false, 00:24:18.413 "ddgst": false 00:24:18.413 }, 00:24:18.413 "method": "bdev_nvme_attach_controller" 00:24:18.413 },{ 00:24:18.413 "params": { 00:24:18.413 "name": "Nvme4", 00:24:18.413 "trtype": "rdma", 00:24:18.413 "traddr": "192.168.100.8", 00:24:18.413 "adrfam": "ipv4", 00:24:18.414 "trsvcid": "4420", 00:24:18.414 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:18.414 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:18.414 "hdgst": false, 00:24:18.414 "ddgst": false 00:24:18.414 }, 00:24:18.414 "method": "bdev_nvme_attach_controller" 00:24:18.414 },{ 00:24:18.414 "params": { 00:24:18.414 "name": "Nvme5", 00:24:18.414 "trtype": "rdma", 00:24:18.414 "traddr": "192.168.100.8", 00:24:18.414 "adrfam": "ipv4", 00:24:18.414 "trsvcid": "4420", 00:24:18.414 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:18.414 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:18.414 "hdgst": false, 00:24:18.414 "ddgst": false 00:24:18.414 }, 00:24:18.414 "method": "bdev_nvme_attach_controller" 00:24:18.414 },{ 00:24:18.414 "params": { 00:24:18.414 "name": "Nvme6", 00:24:18.414 "trtype": "rdma", 00:24:18.414 "traddr": "192.168.100.8", 00:24:18.414 "adrfam": "ipv4", 00:24:18.414 "trsvcid": "4420", 00:24:18.414 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:18.414 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:18.414 "hdgst": false, 00:24:18.414 "ddgst": false 00:24:18.414 }, 00:24:18.414 "method": "bdev_nvme_attach_controller" 00:24:18.414 },{ 00:24:18.414 "params": { 00:24:18.414 "name": "Nvme7", 00:24:18.414 "trtype": "rdma", 00:24:18.414 "traddr": "192.168.100.8", 00:24:18.414 "adrfam": "ipv4", 00:24:18.414 "trsvcid": "4420", 00:24:18.414 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:18.414 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:18.414 "hdgst": false, 00:24:18.414 "ddgst": false 00:24:18.414 }, 00:24:18.414 "method": "bdev_nvme_attach_controller" 00:24:18.414 },{ 00:24:18.414 "params": { 00:24:18.414 "name": "Nvme8", 00:24:18.414 "trtype": "rdma", 00:24:18.414 "traddr": "192.168.100.8", 00:24:18.414 "adrfam": "ipv4", 00:24:18.414 "trsvcid": "4420", 00:24:18.414 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:18.414 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:18.414 "hdgst": false, 00:24:18.414 "ddgst": false 00:24:18.414 }, 
00:24:18.414 "method": "bdev_nvme_attach_controller" 00:24:18.414 },{ 00:24:18.414 "params": { 00:24:18.414 "name": "Nvme9", 00:24:18.414 "trtype": "rdma", 00:24:18.414 "traddr": "192.168.100.8", 00:24:18.414 "adrfam": "ipv4", 00:24:18.414 "trsvcid": "4420", 00:24:18.414 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:18.414 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:18.414 "hdgst": false, 00:24:18.414 "ddgst": false 00:24:18.414 }, 00:24:18.414 "method": "bdev_nvme_attach_controller" 00:24:18.414 },{ 00:24:18.414 "params": { 00:24:18.414 "name": "Nvme10", 00:24:18.414 "trtype": "rdma", 00:24:18.414 "traddr": "192.168.100.8", 00:24:18.414 "adrfam": "ipv4", 00:24:18.414 "trsvcid": "4420", 00:24:18.414 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:18.414 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:18.414 "hdgst": false, 00:24:18.414 "ddgst": false 00:24:18.414 }, 00:24:18.414 "method": "bdev_nvme_attach_controller" 00:24:18.414 }' 00:24:18.414 [2024-11-20 12:51:51.347129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.414 [2024-11-20 12:51:51.409758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.358 Running I/O for 10 seconds... 00:24:19.931 12:51:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:19.931 12:51:52 -- common/autotest_common.sh@862 -- # return 0 00:24:19.931 12:51:52 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:19.931 12:51:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.931 12:51:52 -- common/autotest_common.sh@10 -- # set +x 00:24:19.931 12:51:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.931 12:51:52 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:19.931 12:51:52 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:19.931 12:51:52 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:19.931 12:51:52 -- target/shutdown.sh@57 -- # local ret=1 00:24:19.931 12:51:52 -- target/shutdown.sh@58 -- # local i 00:24:19.931 12:51:52 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:19.931 12:51:52 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:19.931 12:51:52 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:19.931 12:51:52 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:19.931 12:51:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.931 12:51:52 -- common/autotest_common.sh@10 -- # set +x 00:24:20.193 12:51:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.193 12:51:53 -- target/shutdown.sh@60 -- # read_io_count=373 00:24:20.193 12:51:53 -- target/shutdown.sh@63 -- # '[' 373 -ge 100 ']' 00:24:20.193 12:51:53 -- target/shutdown.sh@64 -- # ret=0 00:24:20.193 12:51:53 -- target/shutdown.sh@65 -- # break 00:24:20.193 12:51:53 -- target/shutdown.sh@69 -- # return 0 00:24:20.193 12:51:53 -- target/shutdown.sh@109 -- # killprocess 611808 00:24:20.193 12:51:53 -- common/autotest_common.sh@936 -- # '[' -z 611808 ']' 00:24:20.193 12:51:53 -- common/autotest_common.sh@940 -- # kill -0 611808 00:24:20.193 12:51:53 -- common/autotest_common.sh@941 -- # uname 00:24:20.193 12:51:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:20.193 12:51:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 611808 00:24:20.193 12:51:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:20.193 12:51:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:20.193 12:51:53 
-- common/autotest_common.sh@954 -- # echo 'killing process with pid 611808' 00:24:20.193 killing process with pid 611808 00:24:20.193 12:51:53 -- common/autotest_common.sh@955 -- # kill 611808 00:24:20.193 12:51:53 -- common/autotest_common.sh@960 -- # wait 611808 00:24:20.193 Received shutdown signal, test time was about 0.959976 seconds 00:24:20.193 00:24:20.193 Latency(us) 00:24:20.193 [2024-11-20T11:51:53.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.193 [2024-11-20T11:51:53.301Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.193 Verification LBA range: start 0x0 length 0x400 00:24:20.193 Nvme1n1 : 0.95 596.12 37.26 0.00 0.00 105591.01 9065.81 93934.93 00:24:20.193 [2024-11-20T11:51:53.301Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.193 Verification LBA range: start 0x0 length 0x400 00:24:20.193 Nvme2n1 : 0.95 595.36 37.21 0.00 0.00 104848.66 9284.27 92624.21 00:24:20.193 [2024-11-20T11:51:53.301Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.193 Verification LBA range: start 0x0 length 0x400 00:24:20.193 Nvme3n1 : 0.95 597.73 37.36 0.00 0.00 103604.62 9557.33 90876.59 00:24:20.193 [2024-11-20T11:51:53.301Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.193 Verification LBA range: start 0x0 length 0x400 00:24:20.193 Nvme4n1 : 0.95 600.09 37.51 0.00 0.00 102326.35 9830.40 89128.96 00:24:20.193 [2024-11-20T11:51:53.301Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.193 Verification LBA range: start 0x0 length 0x400 00:24:20.193 Nvme5n1 : 0.95 593.03 37.06 0.00 0.00 102632.90 10048.85 88255.15 00:24:20.193 [2024-11-20T11:51:53.301Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.193 Verification LBA range: start 0x0 length 0x400 00:24:20.193 Nvme6n1 : 0.95 592.28 37.02 0.00 0.00 101806.16 10267.31 89565.87 00:24:20.193 [2024-11-20T11:51:53.301Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.193 Verification LBA range: start 0x0 length 0x400 00:24:20.193 Nvme7n1 : 0.96 591.52 36.97 0.00 0.00 100970.34 10540.37 91750.40 00:24:20.193 [2024-11-20T11:51:53.301Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.193 Verification LBA range: start 0x0 length 0x400 00:24:20.193 Nvme8n1 : 0.96 590.77 36.92 0.00 0.00 100178.01 10758.83 93934.93 00:24:20.193 [2024-11-20T11:51:53.301Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.193 Verification LBA range: start 0x0 length 0x400 00:24:20.193 Nvme9n1 : 0.96 504.35 31.52 0.00 0.00 116417.14 11086.51 214084.27 00:24:20.193 [2024-11-20T11:51:53.301Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.193 Verification LBA range: start 0x0 length 0x400 00:24:20.193 Nvme10n1 : 0.96 370.21 23.14 0.00 0.00 156840.11 9502.72 414187.52 00:24:20.193 [2024-11-20T11:51:53.301Z] =================================================================================================================== 00:24:20.193 [2024-11-20T11:51:53.301Z] Total : 5631.47 351.97 0.00 0.00 107553.34 9065.81 414187.52 00:24:20.454 12:51:53 -- target/shutdown.sh@112 -- # sleep 1 00:24:21.839 12:51:54 -- target/shutdown.sh@113 -- # kill -0 611461 00:24:21.839 12:51:54 -- target/shutdown.sh@115 -- # stoptarget 00:24:21.839 12:51:54 -- target/shutdown.sh@41 -- # rm -f 
./local-job0-0-verify.state 00:24:21.839 12:51:54 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:21.839 12:51:54 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:21.839 12:51:54 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:21.839 12:51:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:21.839 12:51:54 -- nvmf/common.sh@116 -- # sync 00:24:21.839 12:51:54 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:21.839 12:51:54 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:21.840 12:51:54 -- nvmf/common.sh@119 -- # set +e 00:24:21.840 12:51:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:21.840 12:51:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:21.840 rmmod nvme_rdma 00:24:21.840 rmmod nvme_fabrics 00:24:21.840 12:51:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:21.840 12:51:54 -- nvmf/common.sh@123 -- # set -e 00:24:21.840 12:51:54 -- nvmf/common.sh@124 -- # return 0 00:24:21.840 12:51:54 -- nvmf/common.sh@477 -- # '[' -n 611461 ']' 00:24:21.840 12:51:54 -- nvmf/common.sh@478 -- # killprocess 611461 00:24:21.840 12:51:54 -- common/autotest_common.sh@936 -- # '[' -z 611461 ']' 00:24:21.840 12:51:54 -- common/autotest_common.sh@940 -- # kill -0 611461 00:24:21.840 12:51:54 -- common/autotest_common.sh@941 -- # uname 00:24:21.840 12:51:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:21.840 12:51:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 611461 00:24:21.840 12:51:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:21.840 12:51:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:21.840 12:51:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 611461' 00:24:21.840 killing process with pid 611461 00:24:21.840 12:51:54 -- common/autotest_common.sh@955 -- # kill 611461 00:24:21.840 12:51:54 -- common/autotest_common.sh@960 -- # wait 611461 00:24:21.840 12:51:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:21.840 12:51:54 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:21.840 00:24:21.840 real 0m5.461s 00:24:21.840 user 0m22.353s 00:24:21.840 sys 0m0.952s 00:24:21.840 12:51:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:21.840 12:51:54 -- common/autotest_common.sh@10 -- # set +x 00:24:21.840 ************************************ 00:24:21.840 END TEST nvmf_shutdown_tc2 00:24:21.840 ************************************ 00:24:22.102 12:51:54 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:22.102 12:51:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:22.102 12:51:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:22.102 12:51:54 -- common/autotest_common.sh@10 -- # set +x 00:24:22.102 ************************************ 00:24:22.102 START TEST nvmf_shutdown_tc3 00:24:22.102 ************************************ 00:24:22.102 12:51:54 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc3 00:24:22.102 12:51:54 -- target/shutdown.sh@120 -- # starttarget 00:24:22.102 12:51:54 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:22.102 12:51:54 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:22.102 12:51:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.102 12:51:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:22.102 12:51:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:22.102 12:51:54 -- 
nvmf/common.sh@400 -- # remove_spdk_ns 00:24:22.102 12:51:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.102 12:51:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.102 12:51:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.102 12:51:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:22.102 12:51:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:22.102 12:51:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:22.102 12:51:54 -- common/autotest_common.sh@10 -- # set +x 00:24:22.102 12:51:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:22.102 12:51:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:22.102 12:51:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:22.102 12:51:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:22.102 12:51:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:22.102 12:51:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:22.102 12:51:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:22.102 12:51:55 -- nvmf/common.sh@294 -- # net_devs=() 00:24:22.102 12:51:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:22.102 12:51:55 -- nvmf/common.sh@295 -- # e810=() 00:24:22.102 12:51:55 -- nvmf/common.sh@295 -- # local -ga e810 00:24:22.102 12:51:55 -- nvmf/common.sh@296 -- # x722=() 00:24:22.102 12:51:55 -- nvmf/common.sh@296 -- # local -ga x722 00:24:22.102 12:51:55 -- nvmf/common.sh@297 -- # mlx=() 00:24:22.102 12:51:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:22.102 12:51:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.102 12:51:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.102 12:51:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.102 12:51:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.102 12:51:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.102 12:51:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.102 12:51:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.102 12:51:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.102 12:51:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.102 12:51:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.102 12:51:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.102 12:51:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:22.102 12:51:55 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:22.102 12:51:55 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:22.102 12:51:55 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:22.102 12:51:55 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:22.102 12:51:55 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:22.102 12:51:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:22.102 12:51:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:22.102 12:51:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:24:22.102 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:24:22.102 12:51:55 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:22.102 12:51:55 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:22.102 12:51:55 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:22.102 12:51:55 -- nvmf/common.sh@350 
-- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:22.102 12:51:55 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:22.102 12:51:55 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:22.102 12:51:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:22.102 12:51:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:24:22.102 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:24:22.102 12:51:55 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:22.102 12:51:55 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:22.102 12:51:55 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:22.102 12:51:55 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:22.102 12:51:55 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:22.102 12:51:55 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:22.102 12:51:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:22.102 12:51:55 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:22.102 12:51:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:22.102 12:51:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.102 12:51:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:22.102 12:51:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.102 12:51:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:24:22.102 Found net devices under 0000:98:00.0: mlx_0_0 00:24:22.102 12:51:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.102 12:51:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:22.102 12:51:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.102 12:51:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:22.102 12:51:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.102 12:51:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:24:22.102 Found net devices under 0000:98:00.1: mlx_0_1 00:24:22.102 12:51:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.102 12:51:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:22.102 12:51:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:22.102 12:51:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:22.102 12:51:55 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:22.102 12:51:55 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:22.102 12:51:55 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:22.102 12:51:55 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:22.102 12:51:55 -- nvmf/common.sh@57 -- # uname 00:24:22.102 12:51:55 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:22.102 12:51:55 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:22.103 12:51:55 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:22.103 12:51:55 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:22.103 12:51:55 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:22.103 12:51:55 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:22.103 12:51:55 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:22.103 12:51:55 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:22.103 12:51:55 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:22.103 12:51:55 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:22.103 12:51:55 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:22.103 12:51:55 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:22.103 12:51:55 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:22.103 
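Note: the load_ib_rdma_modules step traced above is simply the standard kernel RDMA stack being loaded before the interface enumeration that follows; a minimal stand-alone sketch of that step, using only the modprobe calls visible in this log, would be:

    #!/usr/bin/env bash
    # Load the kernel modules an RDMA-capable NVMe-oF run depends on,
    # in the same order as the modprobe trace above (run as root).
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done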
12:51:55 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:22.103 12:51:55 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:22.103 12:51:55 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:22.103 12:51:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:22.103 12:51:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:22.103 12:51:55 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:22.103 12:51:55 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:22.103 12:51:55 -- nvmf/common.sh@104 -- # continue 2 00:24:22.103 12:51:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:22.103 12:51:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:22.103 12:51:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:22.103 12:51:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:22.103 12:51:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:22.103 12:51:55 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:22.103 12:51:55 -- nvmf/common.sh@104 -- # continue 2 00:24:22.103 12:51:55 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:22.103 12:51:55 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:22.103 12:51:55 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:22.103 12:51:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:22.103 12:51:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:22.103 12:51:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:22.103 12:51:55 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:22.103 12:51:55 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:22.103 12:51:55 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:22.103 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:22.103 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:24:22.103 altname enp152s0f0np0 00:24:22.103 altname ens817f0np0 00:24:22.103 inet 192.168.100.8/24 scope global mlx_0_0 00:24:22.103 valid_lft forever preferred_lft forever 00:24:22.103 12:51:55 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:22.103 12:51:55 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:22.103 12:51:55 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:22.103 12:51:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:22.103 12:51:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:22.103 12:51:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:22.103 12:51:55 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:22.103 12:51:55 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:22.103 12:51:55 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:22.103 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:22.103 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:24:22.103 altname enp152s0f1np1 00:24:22.103 altname ens817f1np1 00:24:22.103 inet 192.168.100.9/24 scope global mlx_0_1 00:24:22.103 valid_lft forever preferred_lft forever 00:24:22.103 12:51:55 -- nvmf/common.sh@410 -- # return 0 00:24:22.103 12:51:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:22.103 12:51:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:22.103 12:51:55 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:22.103 12:51:55 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:22.103 12:51:55 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:22.103 12:51:55 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 
00:24:22.103 12:51:55 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:22.103 12:51:55 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:22.103 12:51:55 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:22.103 12:51:55 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:22.103 12:51:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:22.103 12:51:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:22.103 12:51:55 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:22.103 12:51:55 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:22.103 12:51:55 -- nvmf/common.sh@104 -- # continue 2 00:24:22.103 12:51:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:22.103 12:51:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:22.103 12:51:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:22.103 12:51:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:22.103 12:51:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:22.103 12:51:55 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:22.103 12:51:55 -- nvmf/common.sh@104 -- # continue 2 00:24:22.103 12:51:55 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:22.103 12:51:55 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:22.103 12:51:55 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:22.103 12:51:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:22.103 12:51:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:22.103 12:51:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:22.103 12:51:55 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:22.103 12:51:55 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:22.103 12:51:55 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:22.103 12:51:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:22.103 12:51:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:22.103 12:51:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:22.103 12:51:55 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:22.103 192.168.100.9' 00:24:22.103 12:51:55 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:22.103 192.168.100.9' 00:24:22.103 12:51:55 -- nvmf/common.sh@445 -- # head -n 1 00:24:22.103 12:51:55 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:22.103 12:51:55 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:22.103 192.168.100.9' 00:24:22.103 12:51:55 -- nvmf/common.sh@446 -- # tail -n +2 00:24:22.103 12:51:55 -- nvmf/common.sh@446 -- # head -n 1 00:24:22.103 12:51:55 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:22.364 12:51:55 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:22.364 12:51:55 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:22.364 12:51:55 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:22.364 12:51:55 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:22.364 12:51:55 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:22.364 12:51:55 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:22.364 12:51:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:22.364 12:51:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:22.364 12:51:55 -- common/autotest_common.sh@10 -- # set +x 00:24:22.364 12:51:55 -- nvmf/common.sh@469 -- # nvmfpid=612645 00:24:22.364 12:51:55 -- nvmf/common.sh@470 -- # waitforlisten 612645 
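The get_ip_address/get_available_rdma_ips trace above reduces to reading the first IPv4 address of each mlx interface; the same pipeline can be run by hand (interface names taken from this run):

    # Print the primary IPv4 address of each RDMA interface, mirroring the
    # ip/awk/cut pipeline traced above; in this run it yields 192.168.100.8
    # for mlx_0_0 and 192.168.100.9 for mlx_0_1.
    for ifc in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done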
00:24:22.364 12:51:55 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:22.364 12:51:55 -- common/autotest_common.sh@829 -- # '[' -z 612645 ']' 00:24:22.364 12:51:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.364 12:51:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:22.364 12:51:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.364 12:51:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:22.364 12:51:55 -- common/autotest_common.sh@10 -- # set +x 00:24:22.364 [2024-11-20 12:51:55.293874] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:22.364 [2024-11-20 12:51:55.293940] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.364 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.364 [2024-11-20 12:51:55.378997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:22.364 [2024-11-20 12:51:55.437489] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:22.364 [2024-11-20 12:51:55.437584] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.364 [2024-11-20 12:51:55.437590] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.364 [2024-11-20 12:51:55.437595] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.364 [2024-11-20 12:51:55.437724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.364 [2024-11-20 12:51:55.437881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:22.364 [2024-11-20 12:51:55.438035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.364 [2024-11-20 12:51:55.438037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:23.306 12:51:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:23.306 12:51:56 -- common/autotest_common.sh@862 -- # return 0 00:24:23.306 12:51:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:23.306 12:51:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:23.306 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:24:23.306 12:51:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.306 12:51:56 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:23.306 12:51:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.306 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:24:23.306 [2024-11-20 12:51:56.154649] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1470ac0/0x1474fb0) succeed. 00:24:23.306 [2024-11-20 12:51:56.165352] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14720b0/0x14b6650) succeed. 
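For reference, the target bring-up captured above (nvmf_tgt launch, waitforlisten, nvmf_create_transport, IB device creation) is roughly equivalent to the manual sequence below; the rpc.py wrapper and the relative paths are assumptions based on a stock SPDK tree, while the flags themselves are copied from this log:

    # Start the NVMe-oF target with the same core mask and tracepoint mask
    # used above, then create the RDMA transport once the app is up.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    ./scripts/rpc.py framework_wait_init
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192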
00:24:23.306 12:51:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.306 12:51:56 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:23.306 12:51:56 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:23.306 12:51:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:23.306 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:24:23.306 12:51:56 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:23.306 12:51:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.306 12:51:56 -- target/shutdown.sh@28 -- # cat 00:24:23.306 12:51:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.306 12:51:56 -- target/shutdown.sh@28 -- # cat 00:24:23.306 12:51:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.306 12:51:56 -- target/shutdown.sh@28 -- # cat 00:24:23.306 12:51:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.306 12:51:56 -- target/shutdown.sh@28 -- # cat 00:24:23.306 12:51:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.306 12:51:56 -- target/shutdown.sh@28 -- # cat 00:24:23.306 12:51:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.306 12:51:56 -- target/shutdown.sh@28 -- # cat 00:24:23.306 12:51:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.306 12:51:56 -- target/shutdown.sh@28 -- # cat 00:24:23.306 12:51:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.306 12:51:56 -- target/shutdown.sh@28 -- # cat 00:24:23.306 12:51:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.306 12:51:56 -- target/shutdown.sh@28 -- # cat 00:24:23.306 12:51:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:23.306 12:51:56 -- target/shutdown.sh@28 -- # cat 00:24:23.306 12:51:56 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:23.306 12:51:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.306 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:24:23.306 Malloc1 00:24:23.306 [2024-11-20 12:51:56.359966] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:23.306 Malloc2 00:24:23.567 Malloc3 00:24:23.567 Malloc4 00:24:23.567 Malloc5 00:24:23.567 Malloc6 00:24:23.567 Malloc7 00:24:23.567 Malloc8 00:24:23.567 Malloc9 00:24:23.828 Malloc10 00:24:23.828 12:51:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.828 12:51:56 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:23.828 12:51:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:23.828 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:24:23.828 12:51:56 -- target/shutdown.sh@124 -- # perfpid=613032 00:24:23.828 12:51:56 -- target/shutdown.sh@125 -- # waitforlisten 613032 /var/tmp/bdevperf.sock 00:24:23.828 12:51:56 -- common/autotest_common.sh@829 -- # '[' -z 613032 ']' 00:24:23.828 12:51:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:23.828 12:51:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.828 12:51:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:23.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
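The create_subsystems phase above (Malloc1..Malloc10, one subsystem per bdev, an RDMA listener on 192.168.100.8 port 4420) is driven from the generated rpcs.txt, which the log does not print; a hedged sketch of what one such subsystem setup typically looks like with stock rpc.py calls (the bdev size, block size and serial number below are illustrative, not taken from this run):

    # Create one malloc bdev and export it over NVMe-oF/RDMA, matching the
    # cnode1 / 192.168.100.8:4420 pattern visible in this run.
    ./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -f ipv4 -s 4420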
00:24:23.828 12:51:56 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:23.828 12:51:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.828 12:51:56 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:23.828 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:24:23.828 12:51:56 -- nvmf/common.sh@520 -- # config=() 00:24:23.828 12:51:56 -- nvmf/common.sh@520 -- # local subsystem config 00:24:23.828 12:51:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:23.828 12:51:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:23.828 { 00:24:23.828 "params": { 00:24:23.828 "name": "Nvme$subsystem", 00:24:23.828 "trtype": "$TEST_TRANSPORT", 00:24:23.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:23.828 "adrfam": "ipv4", 00:24:23.828 "trsvcid": "$NVMF_PORT", 00:24:23.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:23.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:23.828 "hdgst": ${hdgst:-false}, 00:24:23.828 "ddgst": ${ddgst:-false} 00:24:23.828 }, 00:24:23.828 "method": "bdev_nvme_attach_controller" 00:24:23.828 } 00:24:23.828 EOF 00:24:23.828 )") 00:24:23.828 12:51:56 -- nvmf/common.sh@542 -- # cat 00:24:23.828 12:51:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:23.828 12:51:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:23.828 { 00:24:23.828 "params": { 00:24:23.828 "name": "Nvme$subsystem", 00:24:23.828 "trtype": "$TEST_TRANSPORT", 00:24:23.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:23.828 "adrfam": "ipv4", 00:24:23.828 "trsvcid": "$NVMF_PORT", 00:24:23.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:23.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:23.828 "hdgst": ${hdgst:-false}, 00:24:23.828 "ddgst": ${ddgst:-false} 00:24:23.828 }, 00:24:23.828 "method": "bdev_nvme_attach_controller" 00:24:23.828 } 00:24:23.828 EOF 00:24:23.828 )") 00:24:23.828 12:51:56 -- nvmf/common.sh@542 -- # cat 00:24:23.828 12:51:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:23.828 12:51:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:23.828 { 00:24:23.828 "params": { 00:24:23.828 "name": "Nvme$subsystem", 00:24:23.828 "trtype": "$TEST_TRANSPORT", 00:24:23.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:23.828 "adrfam": "ipv4", 00:24:23.828 "trsvcid": "$NVMF_PORT", 00:24:23.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:23.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:23.828 "hdgst": ${hdgst:-false}, 00:24:23.828 "ddgst": ${ddgst:-false} 00:24:23.828 }, 00:24:23.828 "method": "bdev_nvme_attach_controller" 00:24:23.828 } 00:24:23.828 EOF 00:24:23.828 )") 00:24:23.828 12:51:56 -- nvmf/common.sh@542 -- # cat 00:24:23.829 12:51:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:23.829 12:51:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:23.829 { 00:24:23.829 "params": { 00:24:23.829 "name": "Nvme$subsystem", 00:24:23.829 "trtype": "$TEST_TRANSPORT", 00:24:23.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:23.829 "adrfam": "ipv4", 00:24:23.829 "trsvcid": "$NVMF_PORT", 00:24:23.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:23.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:23.829 "hdgst": ${hdgst:-false}, 00:24:23.829 "ddgst": ${ddgst:-false} 00:24:23.829 }, 00:24:23.829 "method": "bdev_nvme_attach_controller" 00:24:23.829 } 00:24:23.829 EOF 00:24:23.829 )") 
00:24:23.829 12:51:56 -- nvmf/common.sh@542 -- # cat 00:24:23.829 12:51:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:23.829 12:51:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:23.829 { 00:24:23.829 "params": { 00:24:23.829 "name": "Nvme$subsystem", 00:24:23.829 "trtype": "$TEST_TRANSPORT", 00:24:23.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:23.829 "adrfam": "ipv4", 00:24:23.829 "trsvcid": "$NVMF_PORT", 00:24:23.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:23.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:23.829 "hdgst": ${hdgst:-false}, 00:24:23.829 "ddgst": ${ddgst:-false} 00:24:23.829 }, 00:24:23.829 "method": "bdev_nvme_attach_controller" 00:24:23.829 } 00:24:23.829 EOF 00:24:23.829 )") 00:24:23.829 12:51:56 -- nvmf/common.sh@542 -- # cat 00:24:23.829 12:51:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:23.829 12:51:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:23.829 { 00:24:23.829 "params": { 00:24:23.829 "name": "Nvme$subsystem", 00:24:23.829 "trtype": "$TEST_TRANSPORT", 00:24:23.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:23.829 "adrfam": "ipv4", 00:24:23.829 "trsvcid": "$NVMF_PORT", 00:24:23.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:23.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:23.829 "hdgst": ${hdgst:-false}, 00:24:23.829 "ddgst": ${ddgst:-false} 00:24:23.829 }, 00:24:23.829 "method": "bdev_nvme_attach_controller" 00:24:23.829 } 00:24:23.829 EOF 00:24:23.829 )") 00:24:23.829 12:51:56 -- nvmf/common.sh@542 -- # cat 00:24:23.829 12:51:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:23.829 12:51:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:23.829 { 00:24:23.829 "params": { 00:24:23.829 "name": "Nvme$subsystem", 00:24:23.829 "trtype": "$TEST_TRANSPORT", 00:24:23.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:23.829 "adrfam": "ipv4", 00:24:23.829 "trsvcid": "$NVMF_PORT", 00:24:23.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:23.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:23.829 "hdgst": ${hdgst:-false}, 00:24:23.829 "ddgst": ${ddgst:-false} 00:24:23.829 }, 00:24:23.829 "method": "bdev_nvme_attach_controller" 00:24:23.829 } 00:24:23.829 EOF 00:24:23.829 )") 00:24:23.829 12:51:56 -- nvmf/common.sh@542 -- # cat 00:24:23.829 [2024-11-20 12:51:56.807851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:23.829 [2024-11-20 12:51:56.807935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid613032 ] 00:24:23.829 12:51:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:23.829 12:51:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:23.829 { 00:24:23.829 "params": { 00:24:23.829 "name": "Nvme$subsystem", 00:24:23.829 "trtype": "$TEST_TRANSPORT", 00:24:23.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:23.829 "adrfam": "ipv4", 00:24:23.829 "trsvcid": "$NVMF_PORT", 00:24:23.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:23.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:23.829 "hdgst": ${hdgst:-false}, 00:24:23.829 "ddgst": ${ddgst:-false} 00:24:23.829 }, 00:24:23.829 "method": "bdev_nvme_attach_controller" 00:24:23.829 } 00:24:23.829 EOF 00:24:23.829 )") 00:24:23.829 12:51:56 -- nvmf/common.sh@542 -- # cat 00:24:23.829 12:51:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:23.829 12:51:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:23.829 { 00:24:23.829 "params": { 00:24:23.829 "name": "Nvme$subsystem", 00:24:23.829 "trtype": "$TEST_TRANSPORT", 00:24:23.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:23.829 "adrfam": "ipv4", 00:24:23.829 "trsvcid": "$NVMF_PORT", 00:24:23.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:23.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:23.829 "hdgst": ${hdgst:-false}, 00:24:23.829 "ddgst": ${ddgst:-false} 00:24:23.829 }, 00:24:23.829 "method": "bdev_nvme_attach_controller" 00:24:23.829 } 00:24:23.829 EOF 00:24:23.829 )") 00:24:23.829 12:51:56 -- nvmf/common.sh@542 -- # cat 00:24:23.829 12:51:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:23.829 12:51:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:23.829 { 00:24:23.829 "params": { 00:24:23.829 "name": "Nvme$subsystem", 00:24:23.829 "trtype": "$TEST_TRANSPORT", 00:24:23.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:23.829 "adrfam": "ipv4", 00:24:23.829 "trsvcid": "$NVMF_PORT", 00:24:23.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:23.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:23.829 "hdgst": ${hdgst:-false}, 00:24:23.829 "ddgst": ${ddgst:-false} 00:24:23.829 }, 00:24:23.829 "method": "bdev_nvme_attach_controller" 00:24:23.829 } 00:24:23.829 EOF 00:24:23.829 )") 00:24:23.829 12:51:56 -- nvmf/common.sh@542 -- # cat 00:24:23.829 12:51:56 -- nvmf/common.sh@544 -- # jq . 
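Each stanza that the gen_nvmf_target_json loop above emits (and that bdevperf consumes via --json /dev/fd/63) corresponds to one bdev_nvme_attach_controller call; for Nvme1 the equivalent rpc.py invocation would look as follows, with every parameter copied from the generated config printed below:

    # Attach the first controller over RDMA against the running bdevperf
    # instance; name, traddr, subnqn and hostnqn match the generated JSON.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme1 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1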
00:24:23.829 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.829 12:51:56 -- nvmf/common.sh@545 -- # IFS=, 00:24:23.829 12:51:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:23.829 "params": { 00:24:23.829 "name": "Nvme1", 00:24:23.829 "trtype": "rdma", 00:24:23.829 "traddr": "192.168.100.8", 00:24:23.829 "adrfam": "ipv4", 00:24:23.829 "trsvcid": "4420", 00:24:23.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:23.829 "hdgst": false, 00:24:23.829 "ddgst": false 00:24:23.829 }, 00:24:23.829 "method": "bdev_nvme_attach_controller" 00:24:23.829 },{ 00:24:23.829 "params": { 00:24:23.829 "name": "Nvme2", 00:24:23.829 "trtype": "rdma", 00:24:23.829 "traddr": "192.168.100.8", 00:24:23.829 "adrfam": "ipv4", 00:24:23.829 "trsvcid": "4420", 00:24:23.829 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:23.829 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:23.829 "hdgst": false, 00:24:23.829 "ddgst": false 00:24:23.829 }, 00:24:23.829 "method": "bdev_nvme_attach_controller" 00:24:23.829 },{ 00:24:23.829 "params": { 00:24:23.829 "name": "Nvme3", 00:24:23.829 "trtype": "rdma", 00:24:23.829 "traddr": "192.168.100.8", 00:24:23.829 "adrfam": "ipv4", 00:24:23.829 "trsvcid": "4420", 00:24:23.829 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:23.829 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:23.829 "hdgst": false, 00:24:23.829 "ddgst": false 00:24:23.829 }, 00:24:23.829 "method": "bdev_nvme_attach_controller" 00:24:23.829 },{ 00:24:23.829 "params": { 00:24:23.829 "name": "Nvme4", 00:24:23.829 "trtype": "rdma", 00:24:23.829 "traddr": "192.168.100.8", 00:24:23.829 "adrfam": "ipv4", 00:24:23.829 "trsvcid": "4420", 00:24:23.829 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:23.829 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:23.829 "hdgst": false, 00:24:23.829 "ddgst": false 00:24:23.829 }, 00:24:23.829 "method": "bdev_nvme_attach_controller" 00:24:23.829 },{ 00:24:23.829 "params": { 00:24:23.829 "name": "Nvme5", 00:24:23.829 "trtype": "rdma", 00:24:23.829 "traddr": "192.168.100.8", 00:24:23.829 "adrfam": "ipv4", 00:24:23.829 "trsvcid": "4420", 00:24:23.829 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:23.829 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:23.829 "hdgst": false, 00:24:23.829 "ddgst": false 00:24:23.829 }, 00:24:23.829 "method": "bdev_nvme_attach_controller" 00:24:23.829 },{ 00:24:23.829 "params": { 00:24:23.829 "name": "Nvme6", 00:24:23.829 "trtype": "rdma", 00:24:23.829 "traddr": "192.168.100.8", 00:24:23.829 "adrfam": "ipv4", 00:24:23.829 "trsvcid": "4420", 00:24:23.829 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:23.829 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:23.829 "hdgst": false, 00:24:23.829 "ddgst": false 00:24:23.829 }, 00:24:23.829 "method": "bdev_nvme_attach_controller" 00:24:23.829 },{ 00:24:23.829 "params": { 00:24:23.829 "name": "Nvme7", 00:24:23.829 "trtype": "rdma", 00:24:23.829 "traddr": "192.168.100.8", 00:24:23.829 "adrfam": "ipv4", 00:24:23.829 "trsvcid": "4420", 00:24:23.829 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:23.829 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:23.829 "hdgst": false, 00:24:23.829 "ddgst": false 00:24:23.829 }, 00:24:23.829 "method": "bdev_nvme_attach_controller" 00:24:23.829 },{ 00:24:23.829 "params": { 00:24:23.829 "name": "Nvme8", 00:24:23.829 "trtype": "rdma", 00:24:23.830 "traddr": "192.168.100.8", 00:24:23.830 "adrfam": "ipv4", 00:24:23.830 "trsvcid": "4420", 00:24:23.830 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:23.830 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:24:23.830 "hdgst": false, 00:24:23.830 "ddgst": false 00:24:23.830 }, 00:24:23.830 "method": "bdev_nvme_attach_controller" 00:24:23.830 },{ 00:24:23.830 "params": { 00:24:23.830 "name": "Nvme9", 00:24:23.830 "trtype": "rdma", 00:24:23.830 "traddr": "192.168.100.8", 00:24:23.830 "adrfam": "ipv4", 00:24:23.830 "trsvcid": "4420", 00:24:23.830 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:23.830 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:23.830 "hdgst": false, 00:24:23.830 "ddgst": false 00:24:23.830 }, 00:24:23.830 "method": "bdev_nvme_attach_controller" 00:24:23.830 },{ 00:24:23.830 "params": { 00:24:23.830 "name": "Nvme10", 00:24:23.830 "trtype": "rdma", 00:24:23.830 "traddr": "192.168.100.8", 00:24:23.830 "adrfam": "ipv4", 00:24:23.830 "trsvcid": "4420", 00:24:23.830 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:23.830 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:23.830 "hdgst": false, 00:24:23.830 "ddgst": false 00:24:23.830 }, 00:24:23.830 "method": "bdev_nvme_attach_controller" 00:24:23.830 }' 00:24:23.830 [2024-11-20 12:51:56.872226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.090 [2024-11-20 12:51:56.935966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.031 Running I/O for 10 seconds... 00:24:25.291 12:51:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:25.291 12:51:58 -- common/autotest_common.sh@862 -- # return 0 00:24:25.291 12:51:58 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:25.291 12:51:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.291 12:51:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.551 12:51:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.551 12:51:58 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:25.551 12:51:58 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:25.551 12:51:58 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:25.551 12:51:58 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:25.551 12:51:58 -- target/shutdown.sh@57 -- # local ret=1 00:24:25.551 12:51:58 -- target/shutdown.sh@58 -- # local i 00:24:25.551 12:51:58 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:25.551 12:51:58 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:25.551 12:51:58 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:25.551 12:51:58 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:25.551 12:51:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.551 12:51:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.551 12:51:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.551 12:51:58 -- target/shutdown.sh@60 -- # read_io_count=369 00:24:25.551 12:51:58 -- target/shutdown.sh@63 -- # '[' 369 -ge 100 ']' 00:24:25.551 12:51:58 -- target/shutdown.sh@64 -- # ret=0 00:24:25.551 12:51:58 -- target/shutdown.sh@65 -- # break 00:24:25.551 12:51:58 -- target/shutdown.sh@69 -- # return 0 00:24:25.551 12:51:58 -- target/shutdown.sh@134 -- # killprocess 612645 00:24:25.551 12:51:58 -- common/autotest_common.sh@936 -- # '[' -z 612645 ']' 00:24:25.551 12:51:58 -- common/autotest_common.sh@940 -- # kill -0 612645 00:24:25.551 12:51:58 -- common/autotest_common.sh@941 -- # uname 00:24:25.551 12:51:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:25.551 12:51:58 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 612645 00:24:25.811 12:51:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:25.812 12:51:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:25.812 12:51:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 612645' 00:24:25.812 killing process with pid 612645 00:24:25.812 12:51:58 -- common/autotest_common.sh@955 -- # kill 612645 00:24:25.812 12:51:58 -- common/autotest_common.sh@960 -- # wait 612645 00:24:26.072 12:51:59 -- target/shutdown.sh@135 -- # nvmfpid= 00:24:26.072 12:51:59 -- target/shutdown.sh@138 -- # sleep 1 00:24:26.647 [2024-11-20 12:51:59.747207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.647 [2024-11-20 12:51:59.747247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:b1ba44d0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.647 [2024-11-20 12:51:59.747259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.647 [2024-11-20 12:51:59.747267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:b1ba44d0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.647 [2024-11-20 12:51:59.747275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.647 [2024-11-20 12:51:59.747283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:b1ba44d0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.648 [2024-11-20 12:51:59.747291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.648 [2024-11-20 12:51:59.747298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:b1ba44d0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.648 [2024-11-20 12:51:59.749654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:26.648 [2024-11-20 12:51:59.749674] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
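The qpair and controller errors that follow are the expected client-side fallout of tc3 killing the target (pid 612645) while bdevperf still has I/O in flight: each attached controller drops to the failed state and its queued commands are aborted. To inspect the attached controllers from the still-running bdevperf process, the controller list can be queried over its RPC socket (a sketch, assuming the stock rpc.py client):

    # List the NVMe-oF controllers the bdevperf application currently holds.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers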
00:24:26.648 [2024-11-20 12:51:59.749696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.648 [2024-11-20 12:51:59.749705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.648 [2024-11-20 12:51:59.749713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.648 [2024-11-20 12:51:59.749721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.648 [2024-11-20 12:51:59.749729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.648 [2024-11-20 12:51:59.749736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.648 [2024-11-20 12:51:59.749744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.648 [2024-11-20 12:51:59.749751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.648 [2024-11-20 12:51:59.752228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:26.648 [2024-11-20 12:51:59.752239] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:26.648 [2024-11-20 12:51:59.752254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.648 [2024-11-20 12:51:59.752261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.648 [2024-11-20 12:51:59.752270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.648 [2024-11-20 12:51:59.752277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.648 [2024-11-20 12:51:59.752285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.648 [2024-11-20 12:51:59.752292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.648 [2024-11-20 12:51:59.752300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.648 [2024-11-20 12:51:59.752307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.754777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:26.922 [2024-11-20 12:51:59.754788] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:26.922 [2024-11-20 12:51:59.754802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.754809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.754817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.754824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.754832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.754843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.754851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.754858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.757430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:26.922 [2024-11-20 12:51:59.757441] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.922 [2024-11-20 12:51:59.757454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.757462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.757470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.757477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.757485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.757492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.757500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.757507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.760022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:26.922 [2024-11-20 12:51:59.760033] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:24:26.922 [2024-11-20 12:51:59.760046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.760054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.760062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.760069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.760076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.760083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.760091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.760097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.762673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:26.922 [2024-11-20 12:51:59.762684] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:26.922 [2024-11-20 12:51:59.762697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.762710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.762718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.762725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.762733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.762740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.762747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.762754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.764890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:26.922 [2024-11-20 12:51:59.764901] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:24:26.922 [2024-11-20 12:51:59.764914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.764922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.764929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.764936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.764944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.764951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.764959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.764965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.767573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:26.922 [2024-11-20 12:51:59.767583] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:26.922 [2024-11-20 12:51:59.767597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.922 [2024-11-20 12:51:59.767605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.922 [2024-11-20 12:51:59.767613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.923 [2024-11-20 12:51:59.767619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.767627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.923 [2024-11-20 12:51:59.767634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.767645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.923 [2024-11-20 12:51:59.767651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.770043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:26.923 [2024-11-20 12:51:59.770054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:24:26.923 [2024-11-20 12:51:59.770067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.923 [2024-11-20 12:51:59.770074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.770082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.923 [2024-11-20 12:51:59.770089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.770097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.923 [2024-11-20 12:51:59.770103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.770111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.923 [2024-11-20 12:51:59.770118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:38140 cdw0:b1ba44d0 sqhd:d900 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:26.923 [2024-11-20 12:51:59.772506] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:26.923 [2024-11-20 12:51:59.772521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x181d00 00:24:26.923 [2024-11-20 12:51:59.772530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x181d00 00:24:26.923 [2024-11-20 12:51:59.772570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019243e80 len:0x10000 key:0x182900 00:24:26.923 [2024-11-20 12:51:59.772590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x181500 00:24:26.923 [2024-11-20 12:51:59.772611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x183f00 00:24:26.923 [2024-11-20 12:51:59.772631] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x180c00 00:24:26.923 [2024-11-20 12:51:59.772654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x180c00 00:24:26.923 [2024-11-20 12:51:59.772674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x183f00 00:24:26.923 [2024-11-20 12:51:59.772694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x180c00 00:24:26.923 [2024-11-20 12:51:59.772713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x181d00 00:24:26.923 [2024-11-20 12:51:59.772733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x180c00 00:24:26.923 [2024-11-20 12:51:59.772753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x180c00 00:24:26.923 [2024-11-20 12:51:59.772772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x183f00 00:24:26.923 [2024-11-20 12:51:59.772791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x180c00 00:24:26.923 [2024-11-20 12:51:59.772811] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x181d00 00:24:26.923 [2024-11-20 12:51:59.772830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x181500 00:24:26.923 [2024-11-20 12:51:59.772849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x181d00 00:24:26.923 [2024-11-20 12:51:59.772870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003edf040 len:0x10000 key:0x183e00 00:24:26.923 [2024-11-20 12:51:59.772890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019260e00 len:0x10000 key:0x182900 00:24:26.923 [2024-11-20 12:51:59.772909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x181d00 00:24:26.923 [2024-11-20 12:51:59.772929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x181500 00:24:26.923 [2024-11-20 12:51:59.772948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x181500 00:24:26.923 [2024-11-20 12:51:59.772968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.772980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x181d00 00:24:26.923 [2024-11-20 12:51:59.772994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.773006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003eaeec0 len:0x10000 key:0x183e00 00:24:26.923 [2024-11-20 12:51:59.773014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.923 [2024-11-20 12:51:59.773026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e6ecc0 len:0x10000 key:0x183e00 00:24:26.923 [2024-11-20 12:51:59.773033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x181500 00:24:26.924 [2024-11-20 12:51:59.773053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x183f00 00:24:26.924 [2024-11-20 12:51:59.773072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x181d00 00:24:26.924 [2024-11-20 12:51:59.773095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x181d00 00:24:26.924 [2024-11-20 12:51:59.773115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003ebef40 len:0x10000 key:0x183e00 00:24:26.924 [2024-11-20 12:51:59.773134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x180c00 00:24:26.924 [2024-11-20 12:51:59.773154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x180c00 00:24:26.924 [2024-11-20 12:51:59.773173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:70400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x181d00 00:24:26.924 [2024-11-20 12:51:59.773192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x180c00 00:24:26.924 [2024-11-20 12:51:59.773212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x180c00 00:24:26.924 [2024-11-20 12:51:59.773231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x181d00 00:24:26.924 [2024-11-20 12:51:59.773251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x183f00 00:24:26.924 [2024-11-20 12:51:59.773271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x181d00 00:24:26.924 [2024-11-20 12:51:59.773294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x180c00 00:24:26.924 [2024-11-20 12:51:59.773313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001184a000 len:0x10000 key:0x184400 00:24:26.924 [2024-11-20 12:51:59.773335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011829000 len:0x10000 key:0x184400 00:24:26.924 [2024-11-20 12:51:59.773356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 
00:24:26.924 [2024-11-20 12:51:59.773369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010158000 len:0x10000 key:0x184400 00:24:26.924 [2024-11-20 12:51:59.773377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001290c000 len:0x10000 key:0x184400 00:24:26.924 [2024-11-20 12:51:59.773397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128eb000 len:0x10000 key:0x184400 00:24:26.924 [2024-11-20 12:51:59.773417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:61184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128ca000 len:0x10000 key:0x184400 00:24:26.924 [2024-11-20 12:51:59.773437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c735000 len:0x10000 key:0x184400 00:24:26.924 [2024-11-20 12:51:59.773457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c714000 len:0x10000 key:0x184400 00:24:26.924 [2024-11-20 12:51:59.773477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c6f3000 len:0x10000 key:0x184400 00:24:26.924 [2024-11-20 12:51:59.773497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c6d2000 len:0x10000 key:0x184400 00:24:26.924 [2024-11-20 12:51:59.773516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c6b1000 len:0x10000 key:0x184400 00:24:26.924 [2024-11-20 12:51:59.773536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773550] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c690000 len:0x10000 key:0x184400 00:24:26.924 [2024-11-20 12:51:59.773558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db0f000 len:0x10000 key:0x184400 00:24:26.924 [2024-11-20 12:51:59.773578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000daee000 len:0x10000 key:0x184400 00:24:26.924 [2024-11-20 12:51:59.773598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dacd000 len:0x10000 key:0x184400 00:24:26.924 [2024-11-20 12:51:59.773617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000daac000 len:0x10000 key:0x184400 00:24:26.924 [2024-11-20 12:51:59.773637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.924 [2024-11-20 12:51:59.773650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010599000 len:0x10000 key:0x184400 00:24:26.925 [2024-11-20 12:51:59.773657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.773670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105ba000 len:0x10000 key:0x184400 00:24:26.925 [2024-11-20 12:51:59.773677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.773690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105db000 len:0x10000 key:0x184400 00:24:26.925 [2024-11-20 12:51:59.773697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.773710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105fc000 len:0x10000 key:0x184400 00:24:26.925 [2024-11-20 12:51:59.773717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.773730] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001061d000 len:0x10000 key:0x184400 00:24:26.925 [2024-11-20 12:51:59.773737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.773750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001063e000 len:0x10000 key:0x184400 00:24:26.925 [2024-11-20 12:51:59.773757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.773769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001065f000 len:0x10000 key:0x184400 00:24:26.925 [2024-11-20 12:51:59.773779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.773791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e09a000 len:0x10000 key:0x184400 00:24:26.925 [2024-11-20 12:51:59.773799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.773811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e079000 len:0x10000 key:0x184400 00:24:26.925 [2024-11-20 12:51:59.773818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777198] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257100 was disconnected and freed. reset controller. 00:24:26.925 [2024-11-20 12:51:59.777213] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:26.925 [2024-11-20 12:51:59.777225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000709f500 len:0x10000 key:0x183700 00:24:26.925 [2024-11-20 12:51:59.777233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000717fc00 len:0x10000 key:0x183700 00:24:26.925 [2024-11-20 12:51:59.777255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070ff800 len:0x10000 key:0x183700 00:24:26.925 [2024-11-20 12:51:59.777275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002372c0 len:0x10000 key:0x183b00 00:24:26.925 [2024-11-20 12:51:59.777295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b29fb00 len:0x10000 key:0x184400 00:24:26.925 [2024-11-20 12:51:59.777315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b27fa00 len:0x10000 key:0x184400 00:24:26.925 [2024-11-20 12:51:59.777335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e1ea40 len:0x10000 key:0x183e00 00:24:26.925 [2024-11-20 12:51:59.777354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070af580 len:0x10000 key:0x183700 00:24:26.925 [2024-11-20 12:51:59.777374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002c7740 len:0x10000 key:0x183b00 00:24:26.925 [2024-11-20 12:51:59.777397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 
12:51:59.777409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000227240 len:0x10000 key:0x183b00 00:24:26.925 [2024-11-20 12:51:59.777416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002b76c0 len:0x10000 key:0x183b00 00:24:26.925 [2024-11-20 12:51:59.777436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b24f880 len:0x10000 key:0x184400 00:24:26.925 [2024-11-20 12:51:59.777455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071afd80 len:0x10000 key:0x183700 00:24:26.925 [2024-11-20 12:51:59.777475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2dfd00 len:0x10000 key:0x184400 00:24:26.925 [2024-11-20 12:51:59.777495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e2eac0 len:0x10000 key:0x183e00 00:24:26.925 [2024-11-20 12:51:59.777515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070cf680 len:0x10000 key:0x183700 00:24:26.925 [2024-11-20 12:51:59.777535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000267440 len:0x10000 key:0x183b00 00:24:26.925 [2024-11-20 12:51:59.777554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070ef780 len:0x10000 key:0x183700 00:24:26.925 [2024-11-20 12:51:59.777573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e0e9c0 len:0x10000 key:0x183e00 00:24:26.925 [2024-11-20 12:51:59.777594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000704f280 len:0x10000 key:0x183700 00:24:26.925 [2024-11-20 12:51:59.777615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000719fd00 len:0x10000 key:0x183700 00:24:26.925 [2024-11-20 12:51:59.777635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.925 [2024-11-20 12:51:59.777647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000716fb80 len:0x10000 key:0x183700 00:24:26.926 [2024-11-20 12:51:59.777654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.777666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b22f780 len:0x10000 key:0x184400 00:24:26.926 [2024-11-20 12:51:59.777674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.777686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071dff00 len:0x10000 key:0x183700 00:24:26.926 [2024-11-20 12:51:59.777694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.777706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070df700 len:0x10000 key:0x183700 00:24:26.926 [2024-11-20 12:51:59.777713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.777726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000712f980 len:0x10000 key:0x183700 00:24:26.926 [2024-11-20 12:51:59.777733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.777745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2efd80 len:0x10000 key:0x184400 00:24:26.926 [2024-11-20 12:51:59.777752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.777764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:57 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000701f100 len:0x10000 key:0x183700 00:24:26.926 [2024-11-20 12:51:59.777771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.777784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2cfc80 len:0x10000 key:0x184400 00:24:26.926 [2024-11-20 12:51:59.777791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.777803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070bf600 len:0x10000 key:0x183700 00:24:26.926 [2024-11-20 12:51:59.777810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.777822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b28fa80 len:0x10000 key:0x184400 00:24:26.926 [2024-11-20 12:51:59.777831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.777843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000706f380 len:0x10000 key:0x183700 00:24:26.926 [2024-11-20 12:51:59.777850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.777862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000707f400 len:0x10000 key:0x183700 00:24:26.926 [2024-11-20 12:51:59.777869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.777881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b20f680 len:0x10000 key:0x184400 00:24:26.926 [2024-11-20 12:51:59.777889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.777901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e4ebc0 len:0x10000 key:0x183e00 00:24:26.926 [2024-11-20 12:51:59.777908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.777919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002774c0 len:0x10000 key:0x183b00 00:24:26.926 [2024-11-20 12:51:59.777927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.777939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71680 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000713fa00 len:0x10000 key:0x183700 00:24:26.926 [2024-11-20 12:51:59.777946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.777958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b25f900 len:0x10000 key:0x184400 00:24:26.926 [2024-11-20 12:51:59.777965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.777978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002573c0 len:0x10000 key:0x183b00 00:24:26.926 [2024-11-20 12:51:59.777993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.778006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b26f980 len:0x10000 key:0x184400 00:24:26.926 [2024-11-20 12:51:59.778013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.778026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000711f900 len:0x10000 key:0x183700 00:24:26.926 [2024-11-20 12:51:59.778033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.778045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002975c0 len:0x10000 key:0x183b00 00:24:26.926 [2024-11-20 12:51:59.778054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.778066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000247340 len:0x10000 key:0x183b00 00:24:26.926 [2024-11-20 12:51:59.778073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.778086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c315000 len:0x10000 key:0x184400 00:24:26.926 [2024-11-20 12:51:59.778094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.778107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124aa000 len:0x10000 key:0x184400 00:24:26.926 [2024-11-20 12:51:59.778114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.778126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62592 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x2000124cb000 len:0x10000 key:0x184400 00:24:26.926 [2024-11-20 12:51:59.778134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.778147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124ec000 len:0x10000 key:0x184400 00:24:26.926 [2024-11-20 12:51:59.778154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.778167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f918000 len:0x10000 key:0x184400 00:24:26.926 [2024-11-20 12:51:59.778175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.778188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8f7000 len:0x10000 key:0x184400 00:24:26.926 [2024-11-20 12:51:59.778195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.778208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131af000 len:0x10000 key:0x184400 00:24:26.926 [2024-11-20 12:51:59.778215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.778228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c903000 len:0x10000 key:0x184400 00:24:26.926 [2024-11-20 12:51:59.778235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.778248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c8e2000 len:0x10000 key:0x184400 00:24:26.926 [2024-11-20 12:51:59.778255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.778268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c8c1000 len:0x10000 key:0x184400 00:24:26.926 [2024-11-20 12:51:59.778275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.926 [2024-11-20 12:51:59.778289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c8a0000 len:0x10000 key:0x184400 00:24:26.927 [2024-11-20 12:51:59.778297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.778310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dd1f000 len:0x10000 key:0x184400 
00:24:26.927 [2024-11-20 12:51:59.778317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.778329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dcfe000 len:0x10000 key:0x184400 00:24:26.927 [2024-11-20 12:51:59.778337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.778350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dcdd000 len:0x10000 key:0x184400 00:24:26.927 [2024-11-20 12:51:59.778357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.778369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dcbc000 len:0x10000 key:0x184400 00:24:26.927 [2024-11-20 12:51:59.778377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.778389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc9b000 len:0x10000 key:0x184400 00:24:26.927 [2024-11-20 12:51:59.778397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.778409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc38000 len:0x10000 key:0x184400 00:24:26.927 [2024-11-20 12:51:59.778417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.778429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc17000 len:0x10000 key:0x184400 00:24:26.927 [2024-11-20 12:51:59.778437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.778449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbf6000 len:0x10000 key:0x184400 00:24:26.927 [2024-11-20 12:51:59.778457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.778469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbd5000 len:0x10000 key:0x184400 00:24:26.927 [2024-11-20 12:51:59.778477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.778489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbb4000 len:0x10000 key:0x184400 00:24:26.927 [2024-11-20 12:51:59.778497] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.781811] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256ec0 was disconnected and freed. reset controller. 00:24:26.927 [2024-11-20 12:51:59.781824] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.927 [2024-11-20 12:51:59.781835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ef600 len:0x10000 key:0x183600 00:24:26.927 [2024-11-20 12:51:59.781843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.781862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000082eb80 len:0x10000 key:0x183200 00:24:26.927 [2024-11-20 12:51:59.781871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.781883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000052f800 len:0x10000 key:0x183600 00:24:26.927 [2024-11-20 12:51:59.781890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.781902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008bf000 len:0x10000 key:0x183200 00:24:26.927 [2024-11-20 12:51:59.781910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.781922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001953fa80 len:0x10000 key:0x182a00 00:24:26.927 [2024-11-20 12:51:59.781929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.781942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195bfe80 len:0x10000 key:0x182a00 00:24:26.927 [2024-11-20 12:51:59.781949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.781962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001956fc00 len:0x10000 key:0x182a00 00:24:26.927 [2024-11-20 12:51:59.781969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.782025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194ff880 len:0x10000 key:0x182a00 00:24:26.927 [2024-11-20 12:51:59.782034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 
sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.782046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000058fb00 len:0x10000 key:0x183600 00:24:26.927 [2024-11-20 12:51:59.782053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.782066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000089ef00 len:0x10000 key:0x183200 00:24:26.927 [2024-11-20 12:51:59.782074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.782086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000055f980 len:0x10000 key:0x183600 00:24:26.927 [2024-11-20 12:51:59.782098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.782111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004af400 len:0x10000 key:0x183600 00:24:26.927 [2024-11-20 12:51:59.782119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.927 [2024-11-20 12:51:59.782131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005dfd80 len:0x10000 key:0x183600 00:24:26.927 [2024-11-20 12:51:59.782138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005cfd00 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000045f180 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000048f300 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000044f100 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 
[2024-11-20 12:51:59.782229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008df100 len:0x10000 key:0x183200 00:24:26.928 [2024-11-20 12:51:59.782236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000047f280 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000054f900 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001954fb00 len:0x10000 key:0x182a00 00:24:26.928 [2024-11-20 12:51:59.782296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000043f080 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000087ee00 len:0x10000 key:0x183200 00:24:26.928 [2024-11-20 12:51:59.782336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000041ef80 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000042f000 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005afc00 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004cf500 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195cff00 len:0x10000 key:0x182a00 00:24:26.928 [2024-11-20 12:51:59.782434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001952fa00 len:0x10000 key:0x182a00 00:24:26.928 [2024-11-20 12:51:59.782453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000050f700 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000046f200 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001955fb80 len:0x10000 key:0x182a00 00:24:26.928 [2024-11-20 12:51:59.782512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004df580 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000057fa80 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000059fb80 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782584] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004bf480 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ff680 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195afe00 len:0x10000 key:0x182a00 00:24:26.928 [2024-11-20 12:51:59.782632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008aef80 len:0x10000 key:0x183200 00:24:26.928 [2024-11-20 12:51:59.782651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005bfc80 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000053f880 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000051f780 len:0x10000 key:0x183600 00:24:26.928 [2024-11-20 12:51:59.782710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.928 [2024-11-20 12:51:59.782722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000083ec00 len:0x10000 key:0x183200 00:24:26.928 [2024-11-20 12:51:59.782729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.782741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000121d4000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.782748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.782763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:62208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010a7f000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.782770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.782783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e97f000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.782791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.782803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e95e000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.782811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.782824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012cc9000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.782831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.782844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ca8000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.782851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.782863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c399000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.782871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.782884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c378000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.782891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.782903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c357000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.782911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.782923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000caf2000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.782931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.782943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65024 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20000cad1000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.782950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.782963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cab0000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.782970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.782987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4ba000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.782997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.783009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4db000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.783017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.783029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4fc000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.783037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.783049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e51d000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.783057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.783069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001082d000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.783077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.783089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001080c000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.783096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.783108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000107eb000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.783116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.783129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000107ca000 len:0x10000 key:0x184400 
00:24:26.929 [2024-11-20 12:51:59.783136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.783148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000107a9000 len:0x10000 key:0x184400 00:24:26.929 [2024-11-20 12:51:59.783156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.786508] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256c80 was disconnected and freed. reset controller. 00:24:26.929 [2024-11-20 12:51:59.786521] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.929 [2024-11-20 12:51:59.786532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199bfe80 len:0x10000 key:0x182c00 00:24:26.929 [2024-11-20 12:51:59.786540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.786553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001961f780 len:0x10000 key:0x182b00 00:24:26.929 [2024-11-20 12:51:59.786561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.786576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001940f100 len:0x10000 key:0x182a00 00:24:26.929 [2024-11-20 12:51:59.786584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.786597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196afc00 len:0x10000 key:0x182b00 00:24:26.929 [2024-11-20 12:51:59.786604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.786617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001981f180 len:0x10000 key:0x182c00 00:24:26.929 [2024-11-20 12:51:59.786625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.786637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001989f580 len:0x10000 key:0x182c00 00:24:26.929 [2024-11-20 12:51:59.786644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.786657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001984f300 len:0x10000 key:0x182c00 00:24:26.929 [2024-11-20 12:51:59.786664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.929 [2024-11-20 12:51:59.786677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bcff00 len:0x10000 key:0x182d00 00:24:26.930 [2024-11-20 12:51:59.786685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.786696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001946f400 len:0x10000 key:0x182a00 00:24:26.930 [2024-11-20 12:51:59.786704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.786716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001968fb00 len:0x10000 key:0x182b00 00:24:26.930 [2024-11-20 12:51:59.786726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.786739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001943f280 len:0x10000 key:0x182a00 00:24:26.930 [2024-11-20 12:51:59.786746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.786758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001997fc80 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.786765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.786778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194bf680 len:0x10000 key:0x182a00 00:24:26.930 [2024-11-20 12:51:59.786786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.786799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194af600 len:0x10000 key:0x182a00 00:24:26.930 [2024-11-20 12:51:59.786807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.786819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001992fa00 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.786827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.786839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001995fb80 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.786847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.786858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001991f980 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.786866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.786878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196cfd00 len:0x10000 key:0x182b00 00:24:26.930 [2024-11-20 12:51:59.786886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.786898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001994fb00 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.786905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.786917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001942f200 len:0x10000 key:0x182a00 00:24:26.930 [2024-11-20 12:51:59.786924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.786936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001982f200 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.786944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.786956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001990f900 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.786963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.786975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001966fa00 len:0x10000 key:0x182b00 00:24:26.930 [2024-11-20 12:51:59.786989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.787001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ef800 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.787008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.787020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ff880 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.787029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 
00:24:26.930 [2024-11-20 12:51:59.787041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001948f500 len:0x10000 key:0x182a00 00:24:26.930 [2024-11-20 12:51:59.787049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.787061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001999fd80 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.787068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.787080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198af600 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.787088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.787100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001980f100 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.787107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.787119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199dff80 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.787127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.787139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001993fa80 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.787146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.787158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001983f280 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.787165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.787177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199afe00 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.787185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.787197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001945f380 len:0x10000 key:0x182a00 00:24:26.930 [2024-11-20 12:51:59.787204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.787216] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001947f480 len:0x10000 key:0x182a00 00:24:26.930 [2024-11-20 12:51:59.787224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.787236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001998fd00 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.787245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.787257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199cff00 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.787265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.787277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001988f500 len:0x10000 key:0x182c00 00:24:26.930 [2024-11-20 12:51:59.787284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.930 [2024-11-20 12:51:59.787296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001969fb80 len:0x10000 key:0x182b00 00:24:26.930 [2024-11-20 12:51:59.787304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001949f580 len:0x10000 key:0x182a00 00:24:26.931 [2024-11-20 12:51:59.787323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001941f180 len:0x10000 key:0x182a00 00:24:26.931 [2024-11-20 12:51:59.787343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199f0000 len:0x10000 key:0x182c00 00:24:26.931 [2024-11-20 12:51:59.787363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001962f800 len:0x10000 key:0x182b00 00:24:26.931 [2024-11-20 12:51:59.787382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123e4000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.787402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c8f000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.787422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed9f000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.787442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed7e000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.787462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ed9000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.787484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012eb8000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.787503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5a9000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.787523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c588000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.787544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c567000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.787564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:18 nsid:1 lba:64896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cce1000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.787584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ccc0000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.787604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e8b9000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.787624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e8da000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.787644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.787656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e8fb000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.793619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.793664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e91c000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.793674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.793692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e93d000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.793700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.793713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010a5e000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.793721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.793733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010a3d000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.793741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.793754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66304 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x200010a1c000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.793761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.793774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000109fb000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.793782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.793795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000109da000 len:0x10000 key:0x184400 00:24:26.931 [2024-11-20 12:51:59.793802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.797311] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256a40 was disconnected and freed. reset controller. 00:24:26.931 [2024-11-20 12:51:59.797338] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.931 [2024-11-20 12:51:59.797356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c9f580 len:0x10000 key:0x182e00 00:24:26.931 [2024-11-20 12:51:59.797365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.797387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dcff00 len:0x10000 key:0x182e00 00:24:26.931 [2024-11-20 12:51:59.797396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.797409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cdf780 len:0x10000 key:0x182e00 00:24:26.931 [2024-11-20 12:51:59.797416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.797429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a6fa00 len:0x10000 key:0x182d00 00:24:26.931 [2024-11-20 12:51:59.797437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.797449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eef800 len:0x10000 key:0x182f00 00:24:26.931 [2024-11-20 12:51:59.797460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.931 [2024-11-20 12:51:59.797473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f6fc00 len:0x10000 key:0x182f00 00:24:26.931 [2024-11-20 12:51:59.797480] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f1f980 len:0x10000 key:0x182f00 00:24:26.932 [2024-11-20 12:51:59.797500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eaf600 len:0x10000 key:0x182f00 00:24:26.932 [2024-11-20 12:51:59.797520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d3fa80 len:0x10000 key:0x182e00 00:24:26.932 [2024-11-20 12:51:59.797539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a4f900 len:0x10000 key:0x182d00 00:24:26.932 [2024-11-20 12:51:59.797559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d0f900 len:0x10000 key:0x182e00 00:24:26.932 [2024-11-20 12:51:59.797578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c5f380 len:0x10000 key:0x182e00 00:24:26.932 [2024-11-20 12:51:59.797598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d8fd00 len:0x10000 key:0x182e00 00:24:26.932 [2024-11-20 12:51:59.797618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d7fc80 len:0x10000 key:0x182e00 00:24:26.932 [2024-11-20 12:51:59.797638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c0f100 len:0x10000 key:0x182e00 00:24:26.932 [2024-11-20 12:51:59.797657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c3f280 len:0x10000 key:0x182e00 00:24:26.932 [2024-11-20 12:51:59.797677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ff0000 len:0x10000 key:0x182f00 00:24:26.932 [2024-11-20 12:51:59.797699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a8fb00 len:0x10000 key:0x182d00 00:24:26.932 [2024-11-20 12:51:59.797718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c2f200 len:0x10000 key:0x182e00 00:24:26.932 [2024-11-20 12:51:59.797737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cff880 len:0x10000 key:0x182e00 00:24:26.932 [2024-11-20 12:51:59.797757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eff880 len:0x10000 key:0x182f00 00:24:26.932 [2024-11-20 12:51:59.797776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fdff80 len:0x10000 key:0x182f00 00:24:26.932 [2024-11-20 12:51:59.797796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a2f800 len:0x10000 key:0x182d00 00:24:26.932 [2024-11-20 12:51:59.797815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fbfe80 len:0x10000 key:0x182f00 00:24:26.932 [2024-11-20 12:51:59.797835] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fcff00 len:0x10000 key:0x182f00 00:24:26.932 [2024-11-20 12:51:59.797855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d5fb80 len:0x10000 key:0x182e00 00:24:26.932 [2024-11-20 12:51:59.797874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c7f480 len:0x10000 key:0x182e00 00:24:26.932 [2024-11-20 12:51:59.797894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f7fc80 len:0x10000 key:0x182f00 00:24:26.932 [2024-11-20 12:51:59.797915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019edf780 len:0x10000 key:0x182f00 00:24:26.932 [2024-11-20 12:51:59.797934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cbf680 len:0x10000 key:0x182e00 00:24:26.932 [2024-11-20 12:51:59.797954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.797967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c1f180 len:0x10000 key:0x182e00 00:24:26.932 [2024-11-20 12:51:59.797974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.798008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f0f900 len:0x10000 key:0x182f00 00:24:26.932 [2024-11-20 12:51:59.798017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.798029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c8f500 len:0x10000 key:0x182e00 00:24:26.932 [2024-11-20 12:51:59.798037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.932 [2024-11-20 12:51:59.798049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d2fa00 len:0x10000 key:0x182e00 00:24:26.933 [2024-11-20 12:51:59.798056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d4fb00 len:0x10000 key:0x182e00 00:24:26.933 [2024-11-20 12:51:59.798076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c6f400 len:0x10000 key:0x182e00 00:24:26.933 [2024-11-20 12:51:59.798095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019caf600 len:0x10000 key:0x182e00 00:24:26.933 [2024-11-20 12:51:59.798115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f5fb80 len:0x10000 key:0x182f00 00:24:26.933 [2024-11-20 12:51:59.798134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a5f980 len:0x10000 key:0x182d00 00:24:26.933 [2024-11-20 12:51:59.798155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d6fc00 len:0x10000 key:0x182e00 00:24:26.933 [2024-11-20 12:51:59.798175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cef800 len:0x10000 key:0x182e00 00:24:26.933 [2024-11-20 12:51:59.798194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ccf700 len:0x10000 key:0x182e00 00:24:26.933 [2024-11-20 12:51:59.798214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 
p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ddff80 len:0x10000 key:0x182e00 00:24:26.933 [2024-11-20 12:51:59.798234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000125f4000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e9f000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f1bf000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f19e000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000130e9000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000130c8000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7b9000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c798000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 
12:51:59.798412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c777000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cef1000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ced0000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecd9000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecfa000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed1b000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed3c000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed5d000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c6e000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.933 [2024-11-20 12:51:59.798592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c4d000 len:0x10000 key:0x184400 00:24:26.933 [2024-11-20 12:51:59.798599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.798613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c2c000 len:0x10000 key:0x184400 00:24:26.934 [2024-11-20 12:51:59.798621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.798633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c0b000 len:0x10000 key:0x184400 00:24:26.934 [2024-11-20 12:51:59.798641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.798653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010bea000 len:0x10000 key:0x184400 00:24:26.934 [2024-11-20 12:51:59.798661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.802070] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256800 was disconnected and freed. reset controller. 00:24:26.934 [2024-11-20 12:51:59.802137] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:26.934 [2024-11-20 12:51:59.802186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36fc00 len:0x10000 key:0x184300 00:24:26.934 [2024-11-20 12:51:59.802214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.802289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a01f780 len:0x10000 key:0x183000 00:24:26.934 [2024-11-20 12:51:59.802317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.802353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3afe00 len:0x10000 key:0x184300 00:24:26.934 [2024-11-20 12:51:59.802376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.802412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0afc00 len:0x10000 key:0x183000 00:24:26.934 [2024-11-20 12:51:59.802435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.802470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x183300 00:24:26.934 [2024-11-20 12:51:59.802492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.802528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f300 len:0x10000 key:0x184300 00:24:26.934 [2024-11-20 12:51:59.802550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.802585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5f0000 len:0x10000 key:0x183300 00:24:26.934 [2024-11-20 12:51:59.802608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.802645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a57fc80 len:0x10000 key:0x183300 00:24:26.934 [2024-11-20 12:51:59.802676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.802711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e1f180 len:0x10000 key:0x182f00 00:24:26.934 [2024-11-20 12:51:59.802733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.802769] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a08fb00 len:0x10000 key:0x183000 00:24:26.934 [2024-11-20 12:51:59.802792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.802827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3dff80 len:0x10000 key:0x184300 00:24:26.934 [2024-11-20 12:51:59.802849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.802885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a32fa00 len:0x10000 key:0x184300 00:24:26.934 [2024-11-20 12:51:59.802907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.802943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e6f400 len:0x10000 key:0x182f00 00:24:26.934 [2024-11-20 12:51:59.802964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.802988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5f380 len:0x10000 key:0x182f00 00:24:26.934 [2024-11-20 12:51:59.802996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.803008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df780 len:0x10000 key:0x184300 00:24:26.934 [2024-11-20 12:51:59.803016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.803028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f900 len:0x10000 key:0x184300 00:24:26.934 [2024-11-20 12:51:59.803035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.803047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2cf700 len:0x10000 key:0x184300 00:24:26.934 [2024-11-20 12:51:59.803054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.803066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0cfd00 len:0x10000 key:0x183000 00:24:26.934 [2024-11-20 12:51:59.803074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.803086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ff880 len:0x10000 key:0x184300 00:24:26.934 [2024-11-20 12:51:59.803095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.803107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3cff00 len:0x10000 key:0x184300 00:24:26.934 [2024-11-20 12:51:59.803114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.803126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5cff00 len:0x10000 key:0x183300 00:24:26.934 [2024-11-20 12:51:59.803134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.803146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2bf680 len:0x10000 key:0x184300 00:24:26.934 [2024-11-20 12:51:59.803153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.803165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a06fa00 len:0x10000 key:0x183000 00:24:26.934 [2024-11-20 12:51:59.803173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.803185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a29f580 len:0x10000 key:0x184300 00:24:26.934 [2024-11-20 12:51:59.803192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.803204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2af600 len:0x10000 key:0x184300 00:24:26.934 [2024-11-20 12:51:59.803212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.934 [2024-11-20 12:51:59.803224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3f280 len:0x10000 key:0x182f00 00:24:26.935 [2024-11-20 12:51:59.803231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a34fb00 len:0x10000 key:0x184300 00:24:26.935 [2024-11-20 12:51:59.803251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:22 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a25f380 len:0x10000 key:0x184300 00:24:26.935 [2024-11-20 12:51:59.803270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x183300 00:24:26.935 [2024-11-20 12:51:59.803289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a38fd00 len:0x10000 key:0x184300 00:24:26.935 [2024-11-20 12:51:59.803311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ef800 len:0x10000 key:0x184300 00:24:26.935 [2024-11-20 12:51:59.803330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x183300 00:24:26.935 [2024-11-20 12:51:59.803350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a35fb80 len:0x10000 key:0x184300 00:24:26.935 [2024-11-20 12:51:59.803369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0f100 len:0x10000 key:0x182f00 00:24:26.935 [2024-11-20 12:51:59.803388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e2f200 len:0x10000 key:0x182f00 00:24:26.935 [2024-11-20 12:51:59.803408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33fa80 len:0x10000 key:0x184300 00:24:26.935 [2024-11-20 12:51:59.803427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71424 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a37fc80 len:0x10000 key:0x184300 00:24:26.935 [2024-11-20 12:51:59.803447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a23f280 len:0x10000 key:0x184300 00:24:26.935 [2024-11-20 12:51:59.803466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a09fb80 len:0x10000 key:0x183000 00:24:26.935 [2024-11-20 12:51:59.803485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e4f300 len:0x10000 key:0x182f00 00:24:26.935 [2024-11-20 12:51:59.803504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3bfe80 len:0x10000 key:0x184300 00:24:26.935 [2024-11-20 12:51:59.803523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a39fd80 len:0x10000 key:0x184300 00:24:26.935 [2024-11-20 12:51:59.803544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a02f800 len:0x10000 key:0x183000 00:24:26.935 [2024-11-20 12:51:59.803564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012804000 len:0x10000 key:0x184400 00:24:26.935 [2024-11-20 12:51:59.803583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000110af000 len:0x10000 key:0x184400 00:24:26.935 [2024-11-20 12:51:59.803604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62592 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20000f5df000 len:0x10000 key:0x184400 00:24:26.935 [2024-11-20 12:51:59.803625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5be000 len:0x10000 key:0x184400 00:24:26.935 [2024-11-20 12:51:59.803646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000132f9000 len:0x10000 key:0x184400 00:24:26.935 [2024-11-20 12:51:59.803666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000132d8000 len:0x10000 key:0x184400 00:24:26.935 [2024-11-20 12:51:59.803686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca6e000 len:0x10000 key:0x184400 00:24:26.935 [2024-11-20 12:51:59.803706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca4d000 len:0x10000 key:0x184400 00:24:26.935 [2024-11-20 12:51:59.803726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca2c000 len:0x10000 key:0x184400 00:24:26.935 [2024-11-20 12:51:59.803746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d101000 len:0x10000 key:0x184400 00:24:26.935 [2024-11-20 12:51:59.803768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.935 [2024-11-20 12:51:59.803781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d0e0000 len:0x10000 key:0x184400 00:24:26.935 [2024-11-20 12:51:59.803789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.803802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f0f9000 len:0x10000 key:0x184400 
00:24:26.936 [2024-11-20 12:51:59.803810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.803822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f11a000 len:0x10000 key:0x184400 00:24:26.936 [2024-11-20 12:51:59.803829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.803842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f13b000 len:0x10000 key:0x184400 00:24:26.936 [2024-11-20 12:51:59.803849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.803862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f15c000 len:0x10000 key:0x184400 00:24:26.936 [2024-11-20 12:51:59.803869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.803883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f17d000 len:0x10000 key:0x184400 00:24:26.936 [2024-11-20 12:51:59.803891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.803903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e7e000 len:0x10000 key:0x184400 00:24:26.936 [2024-11-20 12:51:59.803910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.803923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e5d000 len:0x10000 key:0x184400 00:24:26.936 [2024-11-20 12:51:59.803931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.803943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e3c000 len:0x10000 key:0x184400 00:24:26.936 [2024-11-20 12:51:59.803950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.803963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e1b000 len:0x10000 key:0x184400 00:24:26.936 [2024-11-20 12:51:59.803970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.803989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010dfa000 len:0x10000 key:0x184400 00:24:26.936 [2024-11-20 12:51:59.803999] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807395] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192565c0 was disconnected and freed. reset controller. 00:24:26.936 [2024-11-20 12:51:59.807409] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.936 [2024-11-20 12:51:59.807421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x184200 00:24:26.936 [2024-11-20 12:51:59.807429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a77fc80 len:0x10000 key:0x184200 00:24:26.936 [2024-11-20 12:51:59.807451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a68f500 len:0x10000 key:0x184200 00:24:26.936 [2024-11-20 12:51:59.807470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a41f780 len:0x10000 key:0x183300 00:24:26.936 [2024-11-20 12:51:59.807489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183d00 00:24:26.936 [2024-11-20 12:51:59.807508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183d00 00:24:26.936 [2024-11-20 12:51:59.807528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x183d00 00:24:26.936 [2024-11-20 12:51:59.807547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x183d00 00:24:26.936 [2024-11-20 12:51:59.807566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 
sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ef800 len:0x10000 key:0x184200 00:24:26.936 [2024-11-20 12:51:59.807585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7f0000 len:0x10000 key:0x184200 00:24:26.936 [2024-11-20 12:51:59.807605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf680 len:0x10000 key:0x184200 00:24:26.936 [2024-11-20 12:51:59.807627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60f100 len:0x10000 key:0x184200 00:24:26.936 [2024-11-20 12:51:59.807647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a73fa80 len:0x10000 key:0x184200 00:24:26.936 [2024-11-20 12:51:59.807667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a72fa00 len:0x10000 key:0x184200 00:24:26.936 [2024-11-20 12:51:59.807686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x183d00 00:24:26.936 [2024-11-20 12:51:59.807706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x183d00 00:24:26.936 [2024-11-20 12:51:59.807725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a99fd80 len:0x10000 key:0x183d00 00:24:26.936 [2024-11-20 12:51:59.807745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 
[2024-11-20 12:51:59.807757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a43f880 len:0x10000 key:0x183300 00:24:26.936 [2024-11-20 12:51:59.807765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9cff00 len:0x10000 key:0x183d00 00:24:26.936 [2024-11-20 12:51:59.807784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af600 len:0x10000 key:0x184200 00:24:26.936 [2024-11-20 12:51:59.807804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.936 [2024-11-20 12:51:59.807816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x183d00 00:24:26.936 [2024-11-20 12:51:59.807823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.807837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x183d00 00:24:26.937 [2024-11-20 12:51:59.807845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.807857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7cff00 len:0x10000 key:0x184200 00:24:26.937 [2024-11-20 12:51:59.807864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.807876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a96fc00 len:0x10000 key:0x183d00 00:24:26.937 [2024-11-20 12:51:59.807883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.807895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fc80 len:0x10000 key:0x183d00 00:24:26.937 [2024-11-20 12:51:59.807903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.807915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f900 len:0x10000 key:0x184200 00:24:26.937 [2024-11-20 12:51:59.807922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.807934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a62f200 len:0x10000 key:0x184200 00:24:26.937 [2024-11-20 12:51:59.807941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.807954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x183d00 00:24:26.937 [2024-11-20 12:51:59.807961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.807973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183d00 00:24:26.937 [2024-11-20 12:51:59.807981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.807998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a66f400 len:0x10000 key:0x184200 00:24:26.937 [2024-11-20 12:51:59.808006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9bfe80 len:0x10000 key:0x183d00 00:24:26.937 [2024-11-20 12:51:59.808025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183d00 00:24:26.937 [2024-11-20 12:51:59.808045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x184200 00:24:26.937 [2024-11-20 12:51:59.808066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df780 len:0x10000 key:0x184200 00:24:26.937 [2024-11-20 12:51:59.808086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ff880 len:0x10000 key:0x184200 00:24:26.937 [2024-11-20 12:51:59.808105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808117] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x184200 00:24:26.937 [2024-11-20 12:51:59.808124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a65f380 len:0x10000 key:0x184200 00:24:26.937 [2024-11-20 12:51:59.808144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x183d00 00:24:26.937 [2024-11-20 12:51:59.808163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a40f700 len:0x10000 key:0x183300 00:24:26.937 [2024-11-20 12:51:59.808182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a71f980 len:0x10000 key:0x184200 00:24:26.937 [2024-11-20 12:51:59.808203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a69f580 len:0x10000 key:0x184200 00:24:26.937 [2024-11-20 12:51:59.808222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x184200 00:24:26.937 [2024-11-20 12:51:59.808241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a78fd00 len:0x10000 key:0x184200 00:24:26.937 [2024-11-20 12:51:59.808260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b7ff000 len:0x10000 key:0x184400 00:24:26.937 [2024-11-20 12:51:59.808281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:62208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000112bf000 len:0x10000 key:0x184400 00:24:26.937 [2024-11-20 12:51:59.808301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9ff000 len:0x10000 key:0x184400 00:24:26.937 [2024-11-20 12:51:59.808321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9de000 len:0x10000 key:0x184400 00:24:26.937 [2024-11-20 12:51:59.808342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013509000 len:0x10000 key:0x184400 00:24:26.937 [2024-11-20 12:51:59.808361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.937 [2024-11-20 12:51:59.808374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000134e8000 len:0x10000 key:0x184400 00:24:26.937 [2024-11-20 12:51:59.808381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.808394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc9f000 len:0x10000 key:0x184400 00:24:26.938 [2024-11-20 12:51:59.808401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.808414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc7e000 len:0x10000 key:0x184400 00:24:26.938 [2024-11-20 12:51:59.808421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.808434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc5d000 len:0x10000 key:0x184400 00:24:26.938 [2024-11-20 12:51:59.808441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.808453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d311000 len:0x10000 key:0x184400 00:24:26.938 [2024-11-20 12:51:59.808461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.808473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65024 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20000d2f0000 len:0x10000 key:0x184400 00:24:26.938 [2024-11-20 12:51:59.808481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.808493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f519000 len:0x10000 key:0x184400 00:24:26.938 [2024-11-20 12:51:59.808500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.808515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f53a000 len:0x10000 key:0x184400 00:24:26.938 [2024-11-20 12:51:59.808522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.808535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f55b000 len:0x10000 key:0x184400 00:24:26.938 [2024-11-20 12:51:59.808542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.808554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f57c000 len:0x10000 key:0x184400 00:24:26.938 [2024-11-20 12:51:59.808562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.808574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f59d000 len:0x10000 key:0x184400 00:24:26.938 [2024-11-20 12:51:59.808582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.808595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001108e000 len:0x10000 key:0x184400 00:24:26.938 [2024-11-20 12:51:59.808602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.808614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001106d000 len:0x10000 key:0x184400 00:24:26.938 [2024-11-20 12:51:59.808622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.808634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001104c000 len:0x10000 key:0x184400 00:24:26.938 [2024-11-20 12:51:59.808642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.808654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001102b000 len:0x10000 key:0x184400 
00:24:26.938 [2024-11-20 12:51:59.808661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.808674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001100a000 len:0x10000 key:0x184400 00:24:26.938 [2024-11-20 12:51:59.808681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.811915] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256380 was disconnected and freed. reset controller. 00:24:26.938 [2024-11-20 12:51:59.811950] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.938 [2024-11-20 12:51:59.811993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183c00 00:24:26.938 [2024-11-20 12:51:59.812016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.812053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x183400 00:24:26.938 [2024-11-20 12:51:59.812083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.812119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5fb80 len:0x10000 key:0x183c00 00:24:26.938 [2024-11-20 12:51:59.812140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.812176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaafc00 len:0x10000 key:0x183400 00:24:26.938 [2024-11-20 12:51:59.812198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.812233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183500 00:24:26.938 [2024-11-20 12:51:59.812255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.812291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183500 00:24:26.938 [2024-11-20 12:51:59.812313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.812348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183500 00:24:26.938 [2024-11-20 12:51:59.812371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.812406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183500 00:24:26.938 [2024-11-20 12:51:59.812429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.812464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adbfe80 len:0x10000 key:0x183c00 00:24:26.938 [2024-11-20 12:51:59.812486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.812521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa8fb00 len:0x10000 key:0x183400 00:24:26.938 [2024-11-20 12:51:59.812543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.938 [2024-11-20 12:51:59.812579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.812601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.812628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x183d00 00:24:26.939 [2024-11-20 12:51:59.812652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a80f100 len:0x10000 key:0x183d00 00:24:26.939 [2024-11-20 12:51:59.812671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.812691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.812711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.812730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aacfd00 len:0x10000 key:0x183400 00:24:26.939 [2024-11-20 12:51:59.812750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.812769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.812789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183500 00:24:26.939 [2024-11-20 12:51:59.812808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.812828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa6fa00 len:0x10000 key:0x183400 00:24:26.939 [2024-11-20 12:51:59.812847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.812867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.812888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 
00:24:26.939 [2024-11-20 12:51:59.812900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.812907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff880 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.812926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.812946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183500 00:24:26.939 [2024-11-20 12:51:59.812965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.812977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad3fa80 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.812990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.813002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.813010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.813021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183500 00:24:26.939 [2024-11-20 12:51:59.813029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.813041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.813048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.813060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.813068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.813080] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adcff00 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.813087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.813100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.813107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.813119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad2fa00 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.813127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.813139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183500 00:24:26.939 [2024-11-20 12:51:59.813146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.813158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa9fb80 len:0x10000 key:0x183400 00:24:26.939 [2024-11-20 12:51:59.813165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.813177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adf0000 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.813184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.813196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6fc00 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.813204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.813216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183c00 00:24:26.939 [2024-11-20 12:51:59.813224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.813236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x183400 00:24:26.939 [2024-11-20 12:51:59.813243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.813255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012990000 len:0x10000 key:0x184400 00:24:26.939 [2024-11-20 12:51:59.813262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.813275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000114cf000 len:0x10000 key:0x184400 00:24:26.939 [2024-11-20 12:51:59.813282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.813295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe1f000 len:0x10000 key:0x184400 00:24:26.939 [2024-11-20 12:51:59.813302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.939 [2024-11-20 12:51:59.813315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdfe000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.813337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013719000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.813357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000136f8000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.813378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ceaf000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.813398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce8e000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.813418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce6d000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.813438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:18 nsid:1 lba:64896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d521000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.813458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d500000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.813478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f939000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.813498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f95a000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.813518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f97b000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.813539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f99c000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.813560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9bd000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.813580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001129e000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.813599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001127d000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.813620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66304 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x20001125c000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.813640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001123b000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.813660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001121a000 len:0x10000 key:0x184400 00:24:26.940 [2024-11-20 12:51:59.813667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817300] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256140 was disconnected and freed. reset controller. 00:24:26.940 [2024-11-20 12:51:59.817313] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.940 [2024-11-20 12:51:59.817324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183800 00:24:26.940 [2024-11-20 12:51:59.817331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x183100 00:24:26.940 [2024-11-20 12:51:59.817358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183100 00:24:26.940 [2024-11-20 12:51:59.817378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183100 00:24:26.940 [2024-11-20 12:51:59.817398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183800 00:24:26.940 [2024-11-20 12:51:59.817421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183800 00:24:26.940 [2024-11-20 12:51:59.817440] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183800 00:24:26.940 [2024-11-20 12:51:59.817460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183800 00:24:26.940 [2024-11-20 12:51:59.817479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183100 00:24:26.940 [2024-11-20 12:51:59.817498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fd80 len:0x10000 key:0x183100 00:24:26.940 [2024-11-20 12:51:59.817517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183100 00:24:26.940 [2024-11-20 12:51:59.817536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183800 00:24:26.940 [2024-11-20 12:51:59.817555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183100 00:24:26.940 [2024-11-20 12:51:59.817575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183100 00:24:26.940 [2024-11-20 12:51:59.817594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183800 00:24:26.940 [2024-11-20 12:51:59.817614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183800 00:24:26.940 [2024-11-20 12:51:59.817635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x183800 00:24:26.940 [2024-11-20 12:51:59.817654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.940 [2024-11-20 12:51:59.817666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x183100 00:24:26.940 [2024-11-20 12:51:59.817673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.817685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x183800 00:24:26.941 [2024-11-20 12:51:59.817693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.817704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183100 00:24:26.941 [2024-11-20 12:51:59.817712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.817723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183800 00:24:26.941 [2024-11-20 12:51:59.817731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.817743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183800 00:24:26.941 [2024-11-20 12:51:59.817750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.817762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fc80 len:0x10000 key:0x183100 00:24:26.941 [2024-11-20 12:51:59.817770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.817782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x183800 00:24:26.941 [2024-11-20 12:51:59.817789] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.817801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183800 00:24:26.941 [2024-11-20 12:51:59.817809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.817820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183100 00:24:26.941 [2024-11-20 12:51:59.817828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.817840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x183800 00:24:26.941 [2024-11-20 12:51:59.817849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.817861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183800 00:24:26.941 [2024-11-20 12:51:59.817868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.817881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183800 00:24:26.941 [2024-11-20 12:51:59.817888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.817900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183100 00:24:26.941 [2024-11-20 12:51:59.817907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.817920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x183800 00:24:26.941 [2024-11-20 12:51:59.817927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.817939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183800 00:24:26.941 [2024-11-20 12:51:59.817946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.817958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183800 00:24:26.941 [2024-11-20 12:51:59.817965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.817977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183100 00:24:26.941 [2024-11-20 12:51:59.818013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x183100 00:24:26.941 [2024-11-20 12:51:59.818033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183800 00:24:26.941 [2024-11-20 12:51:59.818052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183100 00:24:26.941 [2024-11-20 12:51:59.818071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183800 00:24:26.941 [2024-11-20 12:51:59.818090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x183100 00:24:26.941 [2024-11-20 12:51:59.818111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x183100 00:24:26.941 [2024-11-20 12:51:59.818131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x183100 00:24:26.941 [2024-11-20 12:51:59.818150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183100 00:24:26.941 [2024-11-20 12:51:59.818170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 
p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x183100 00:24:26.941 [2024-11-20 12:51:59.818189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ba0000 len:0x10000 key:0x184400 00:24:26.941 [2024-11-20 12:51:59.818208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000116df000 len:0x10000 key:0x184400 00:24:26.941 [2024-11-20 12:51:59.818228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001023f000 len:0x10000 key:0x184400 00:24:26.941 [2024-11-20 12:51:59.818248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001021e000 len:0x10000 key:0x184400 00:24:26.941 [2024-11-20 12:51:59.818268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b529000 len:0x10000 key:0x184400 00:24:26.941 [2024-11-20 12:51:59.818289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b508000 len:0x10000 key:0x184400 00:24:26.941 [2024-11-20 12:51:59.818309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d0bf000 len:0x10000 key:0x184400 00:24:26.941 [2024-11-20 12:51:59.818330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d09e000 len:0x10000 key:0x184400 00:24:26.941 [2024-11-20 12:51:59.818350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 
12:51:59.818363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d07d000 len:0x10000 key:0x184400 00:24:26.941 [2024-11-20 12:51:59.818370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d731000 len:0x10000 key:0x184400 00:24:26.941 [2024-11-20 12:51:59.818390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.941 [2024-11-20 12:51:59.818403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d710000 len:0x10000 key:0x184400 00:24:26.941 [2024-11-20 12:51:59.818411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.818423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd59000 len:0x10000 key:0x184400 00:24:26.942 [2024-11-20 12:51:59.818430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.818443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd7a000 len:0x10000 key:0x184400 00:24:26.942 [2024-11-20 12:51:59.818450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.818463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd9b000 len:0x10000 key:0x184400 00:24:26.942 [2024-11-20 12:51:59.818470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.818482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdbc000 len:0x10000 key:0x184400 00:24:26.942 [2024-11-20 12:51:59.818490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.818502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fddd000 len:0x10000 key:0x184400 00:24:26.942 [2024-11-20 12:51:59.818510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.818522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000114ae000 len:0x10000 key:0x184400 00:24:26.942 [2024-11-20 12:51:59.818529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.818542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001148d000 len:0x10000 key:0x184400 00:24:26.942 [2024-11-20 12:51:59.818550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.818563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001146c000 len:0x10000 key:0x184400 00:24:26.942 [2024-11-20 12:51:59.818570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.818583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001144b000 len:0x10000 key:0x184400 00:24:26.942 [2024-11-20 12:51:59.818590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.818603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001142a000 len:0x10000 key:0x184400 00:24:26.942 [2024-11-20 12:51:59.818610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.821611] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806c00 was disconnected and freed. reset controller. 00:24:26.942 [2024-11-20 12:51:59.821645] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:26.942 [2024-11-20 12:51:59.821677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x184000 00:24:26.942 [2024-11-20 12:51:59.821699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.821737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x184000 00:24:26.942 [2024-11-20 12:51:59.821759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.821795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x184000 00:24:26.942 [2024-11-20 12:51:59.821817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.821853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x184000 00:24:26.942 [2024-11-20 12:51:59.821876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.821911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x184000 00:24:26.942 [2024-11-20 12:51:59.821933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.821968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183a00 00:24:26.942 [2024-11-20 12:51:59.822000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.822036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183a00 00:24:26.942 [2024-11-20 12:51:59.822065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.822093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x183900 00:24:26.942 [2024-11-20 12:51:59.822101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.822113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x183900 00:24:26.942 [2024-11-20 12:51:59.822120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.822132] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x183900 00:24:26.942 [2024-11-20 12:51:59.822140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.822152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x183900 00:24:26.942 [2024-11-20 12:51:59.822159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.822171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x183900 00:24:26.942 [2024-11-20 12:51:59.822179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.822191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x183900 00:24:26.942 [2024-11-20 12:51:59.822198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.822211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x184000 00:24:26.942 [2024-11-20 12:51:59.822218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.822230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x183900 00:24:26.942 [2024-11-20 12:51:59.822238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.822250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x183900 00:24:26.942 [2024-11-20 12:51:59.822257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.822269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4cfd00 len:0x10000 key:0x183a00 00:24:26.942 [2024-11-20 12:51:59.822277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.822289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x184000 00:24:26.942 [2024-11-20 12:51:59.822296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.822310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x184000 00:24:26.942 [2024-11-20 12:51:59.822318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.822329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x184000 00:24:26.942 [2024-11-20 12:51:59.822337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.942 [2024-11-20 12:51:59.822349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183a00 00:24:26.943 [2024-11-20 12:51:59.822356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x183a00 00:24:26.943 [2024-11-20 12:51:59.822375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x184000 00:24:26.943 [2024-11-20 12:51:59.822394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x184000 00:24:26.943 [2024-11-20 12:51:59.822413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x183900 00:24:26.943 [2024-11-20 12:51:59.822433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x183900 00:24:26.943 [2024-11-20 12:51:59.822452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x183a00 00:24:26.943 [2024-11-20 12:51:59.822472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822484] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x184000 00:24:26.943 [2024-11-20 12:51:59.822491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x184000 00:24:26.943 [2024-11-20 12:51:59.822511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfc80 len:0x10000 key:0x183a00 00:24:26.943 [2024-11-20 12:51:59.822532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x183a00 00:24:26.943 [2024-11-20 12:51:59.822552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x184000 00:24:26.943 [2024-11-20 12:51:59.822571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x184000 00:24:26.943 [2024-11-20 12:51:59.822591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x184000 00:24:26.943 [2024-11-20 12:51:59.822610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x183a00 00:24:26.943 [2024-11-20 12:51:59.822629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x184000 00:24:26.943 [2024-11-20 12:51:59.822648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x183900 00:24:26.943 [2024-11-20 12:51:59.822668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x183900 00:24:26.943 [2024-11-20 12:51:59.822687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x183900 00:24:26.943 [2024-11-20 12:51:59.822707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x183900 00:24:26.943 [2024-11-20 12:51:59.822726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000126db000 len:0x10000 key:0x184400 00:24:26.943 [2024-11-20 12:51:59.822751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdcc000 len:0x10000 key:0x184400 00:24:26.943 [2024-11-20 12:51:59.822771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bded000 len:0x10000 key:0x184400 00:24:26.943 [2024-11-20 12:51:59.822791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be0e000 len:0x10000 key:0x184400 00:24:26.943 [2024-11-20 12:51:59.822811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be2f000 len:0x10000 key:0x184400 00:24:26.943 [2024-11-20 12:51:59.822831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20000d9c5000 len:0x10000 key:0x184400 00:24:26.943 [2024-11-20 12:51:59.822851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d9a4000 len:0x10000 key:0x184400 00:24:26.943 [2024-11-20 12:51:59.822871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d983000 len:0x10000 key:0x184400 00:24:26.943 [2024-11-20 12:51:59.822891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d962000 len:0x10000 key:0x184400 00:24:26.943 [2024-11-20 12:51:59.822912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d941000 len:0x10000 key:0x184400 00:24:26.943 [2024-11-20 12:51:59.822931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d920000 len:0x10000 key:0x184400 00:24:26.943 [2024-11-20 12:51:59.822951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.822964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010179000 len:0x10000 key:0x184400 00:24:26.943 [2024-11-20 12:51:59.822973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.823002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001019a000 len:0x10000 key:0x184400 00:24:26.943 [2024-11-20 12:51:59.823010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.823023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000101bb000 len:0x10000 key:0x184400 00:24:26.943 [2024-11-20 12:51:59.823031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.823043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000101dc000 len:0x10000 
key:0x184400 00:24:26.943 [2024-11-20 12:51:59.823051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.823063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000101fd000 len:0x10000 key:0x184400 00:24:26.943 [2024-11-20 12:51:59.823071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.943 [2024-11-20 12:51:59.823083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000116be000 len:0x10000 key:0x184400 00:24:26.943 [2024-11-20 12:51:59.823091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.944 [2024-11-20 12:51:59.823103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001169d000 len:0x10000 key:0x184400 00:24:26.944 [2024-11-20 12:51:59.823110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.944 [2024-11-20 12:51:59.823123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001167c000 len:0x10000 key:0x184400 00:24:26.944 [2024-11-20 12:51:59.823131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.944 [2024-11-20 12:51:59.823143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001165b000 len:0x10000 key:0x184400 00:24:26.944 [2024-11-20 12:51:59.823151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.944 [2024-11-20 12:51:59.823163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001163a000 len:0x10000 key:0x184400 00:24:26.944 [2024-11-20 12:51:59.823170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.944 [2024-11-20 12:51:59.823183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011619000 len:0x10000 key:0x184400 00:24:26.944 [2024-11-20 12:51:59.823190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.944 [2024-11-20 12:51:59.823203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000115f8000 len:0x10000 key:0x184400 00:24:26.944 [2024-11-20 12:51:59.823210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.944 [2024-11-20 12:51:59.823224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000115d7000 len:0x10000 key:0x184400 00:24:26.944 [2024-11-20 
12:51:59.823232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:83c97000 sqhd:5310 p:0 m:0 dnr:0 00:24:26.944 [2024-11-20 12:51:59.844976] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8069c0 was disconnected and freed. reset controller. 00:24:26.944 [2024-11-20 12:51:59.845002] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.944 [2024-11-20 12:51:59.845065] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.944 [2024-11-20 12:51:59.845078] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.944 [2024-11-20 12:51:59.845089] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.944 [2024-11-20 12:51:59.845099] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.944 [2024-11-20 12:51:59.845109] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.944 [2024-11-20 12:51:59.845120] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.944 [2024-11-20 12:51:59.845130] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.944 [2024-11-20 12:51:59.845140] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.944 [2024-11-20 12:51:59.845150] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.944 [2024-11-20 12:51:59.845161] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
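Note on the block above: the long run of ABORTED - SQ DELETION completions and the repeated "Unable to perform failover, already in progress" notices are the expected fallout of this shutdown test, and the bdevperf summary that follows accounts for the same aborted I/O. A loose sketch of the sequence that produces this pattern (gen_nvmf_target_json, num_subsystems and the bdevperf flags appear verbatim later in this log; nvmfpid is an assumed variable for the target PID, not taken from the trace, and these are not the verbatim shutdown.sh steps):

    # Sketch only: run verify I/O against the subsystems, then take the target
    # down while that I/O is still outstanding. Every queued command then
    # completes as ABORTED - SQ DELETION, exactly as printed above.
    $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!
    sleep 1
    kill -9 "$nvmfpid"       # assumed target PID; drops the SQs under bdevperf
    wait "$perfpid" || true  # bdevperf exits after reporting the aborted I/O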
00:24:26.944 task offset: 66304 on job bdev=Nvme1n1 fails
00:24:26.944
00:24:26.944                                                                           Latency(us)
00:24:26.944 [2024-11-20T11:52:00.052Z] Device Information : runtime(s)    IOPS    MiB/s   Fail/s   TO/s     Average        min         max
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme1n1 ended in about 1.99 seconds with error
00:24:26.944   Verification LBA range: start 0x0 length 0x400
00:24:26.944   Nvme1n1            :       1.99    248.15   15.51   32.21   0.00   227561.31   54831.79   1097509.55
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme2n1 ended in about 1.99 seconds with error
00:24:26.944   Verification LBA range: start 0x0 length 0x400
00:24:26.944   Nvme2n1            :       1.99    252.59   15.79   32.14   0.00   223021.59   13107.20   1097509.55
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme3n1 ended in about 2.00 seconds with error
00:24:26.944   Verification LBA range: start 0x0 length 0x400
00:24:26.944   Nvme3n1            :       2.00    251.00   15.69   32.06   0.00   223513.72   58108.59   1097509.55
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme4n1 ended in about 2.01 seconds with error
00:24:26.944   Verification LBA range: start 0x0 length 0x400
00:24:26.944   Nvme4n1            :       2.01    249.67   15.60   31.89   0.00   223144.86   58982.40   1097509.55
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme5n1 ended in about 2.01 seconds with error
00:24:26.944   Verification LBA range: start 0x0 length 0x400
00:24:26.944   Nvme5n1            :       2.01    249.07   15.57   31.82   0.00   223356.74   59856.21   1097509.55
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme6n1 ended in about 2.02 seconds with error
00:24:26.944   Verification LBA range: start 0x0 length 0x400
00:24:26.944   Nvme6n1            :       2.02    248.41   15.53   31.73   0.00   223028.87   59856.21   1097509.55
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme7n1 ended in about 2.02 seconds with error
00:24:26.944   Verification LBA range: start 0x0 length 0x400
00:24:26.944   Nvme7n1            :       2.02    247.84   15.49   31.66   0.00   222642.22   60293.12   1090519.04
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme8n1 ended in about 2.03 seconds with error
00:24:26.944   Verification LBA range: start 0x0 length 0x400
00:24:26.944   Nvme8n1            :       2.03    247.23   15.45   31.58   0.00   222179.03   60293.12   1090519.04
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme9n1 ended in about 2.03 seconds with error
00:24:26.944   Verification LBA range: start 0x0 length 0x400
00:24:26.944   Nvme9n1            :       2.03    246.63   15.41   31.51   0.00   221929.25   58982.40   1090519.04
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:26.944 [2024-11-20T11:52:00.052Z] Job: Nvme10n1 ended in about 2.04 seconds with error
00:24:26.944 Verification LBA range: start 0x0 length 0x400 00:24:26.944 Nvme10n1 : 2.04 142.93 8.93 31.43 0.00 352476.91 55268.69 1090519.04 00:24:26.944 [2024-11-20T11:52:00.052Z] =================================================================================================================== 00:24:26.944 [2024-11-20T11:52:00.052Z] Total : 2383.52 148.97 318.04 0.00 231803.13 13107.20 1097509.55 00:24:26.944 [2024-11-20 12:51:59.870702] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:26.944 [2024-11-20 12:51:59.870725] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.944 [2024-11-20 12:51:59.870738] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:26.944 [2024-11-20 12:51:59.870748] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:26.944 [2024-11-20 12:51:59.870757] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:26.944 [2024-11-20 12:51:59.870880] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:26.944 [2024-11-20 12:51:59.870892] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:26.944 [2024-11-20 12:51:59.870901] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:26.944 [2024-11-20 12:51:59.870909] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:26.944 [2024-11-20 12:51:59.870918] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:26.944 [2024-11-20 12:51:59.870926] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:26.944 [2024-11-20 12:51:59.888420] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:26.944 [2024-11-20 12:51:59.888472] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:26.944 [2024-11-20 12:51:59.888493] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192dc080 00:24:26.944 [2024-11-20 12:51:59.888715] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:26.944 [2024-11-20 12:51:59.888740] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:26.944 [2024-11-20 12:51:59.888756] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d07c0 00:24:26.944 [2024-11-20 12:51:59.889010] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:26.944 [2024-11-20 12:51:59.889040] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:26.944 [2024-11-20 12:51:59.889057] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e53c0 00:24:26.944 [2024-11-20 12:51:59.889339] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event 
channel (status = 8) 00:24:26.944 [2024-11-20 12:51:59.889347] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:26.944 [2024-11-20 12:51:59.889352] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:24:26.944 [2024-11-20 12:51:59.889620] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:26.944 [2024-11-20 12:51:59.889628] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:26.944 [2024-11-20 12:51:59.889634] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b100 00:24:26.944 [2024-11-20 12:51:59.889880] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:26.945 [2024-11-20 12:51:59.889889] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:26.945 [2024-11-20 12:51:59.889895] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8ac0 00:24:26.945 [2024-11-20 12:51:59.890182] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:26.945 [2024-11-20 12:51:59.890191] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:26.945 [2024-11-20 12:51:59.890196] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928f500 00:24:26.945 [2024-11-20 12:51:59.890458] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:26.945 [2024-11-20 12:51:59.890466] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:26.945 [2024-11-20 12:51:59.890472] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e180 00:24:26.945 [2024-11-20 12:51:59.890720] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:26.945 [2024-11-20 12:51:59.890729] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:26.945 [2024-11-20 12:51:59.890735] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192b9d80 00:24:26.945 [2024-11-20 12:51:59.890875] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:26.945 [2024-11-20 12:51:59.890883] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:26.945 [2024-11-20 12:51:59.890889] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c3d00 00:24:26.945 12:52:00 -- target/shutdown.sh@141 -- # kill -9 613032 00:24:26.945 12:52:00 -- target/shutdown.sh@143 -- # stoptarget 00:24:26.945 12:52:00 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:27.206 12:52:00 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:27.206 12:52:00 -- target/shutdown.sh@43 -- # rm -rf 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:27.206 12:52:00 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:27.206 12:52:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:27.206 12:52:00 -- nvmf/common.sh@116 -- # sync 00:24:27.206 12:52:00 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:27.206 12:52:00 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:27.206 12:52:00 -- nvmf/common.sh@119 -- # set +e 00:24:27.206 12:52:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:27.206 12:52:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:27.206 rmmod nvme_rdma 00:24:27.206 rmmod nvme_fabrics 00:24:27.206 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 120: 613032 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:24:27.206 12:52:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:27.206 12:52:00 -- nvmf/common.sh@123 -- # set -e 00:24:27.206 12:52:00 -- nvmf/common.sh@124 -- # return 0 00:24:27.206 12:52:00 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:24:27.206 12:52:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:27.206 12:52:00 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:27.206 00:24:27.206 real 0m5.095s 00:24:27.206 user 0m17.449s 00:24:27.206 sys 0m1.028s 00:24:27.206 12:52:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:27.206 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.206 ************************************ 00:24:27.206 END TEST nvmf_shutdown_tc3 00:24:27.206 ************************************ 00:24:27.206 12:52:00 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:24:27.206 00:24:27.206 real 0m25.181s 00:24:27.206 user 1m12.871s 00:24:27.206 sys 0m8.490s 00:24:27.206 12:52:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:27.206 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.206 ************************************ 00:24:27.206 END TEST nvmf_shutdown 00:24:27.206 ************************************ 00:24:27.206 12:52:00 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:24:27.206 12:52:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:27.206 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.206 12:52:00 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:24:27.206 12:52:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:27.206 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.206 12:52:00 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:24:27.206 12:52:00 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:24:27.206 12:52:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:27.206 12:52:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:27.207 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.207 ************************************ 00:24:27.207 START TEST nvmf_multicontroller 00:24:27.207 ************************************ 00:24:27.207 12:52:00 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:24:27.207 * Looking for test storage... 
00:24:27.468 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:27.468 12:52:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:27.468 12:52:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:27.468 12:52:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:27.468 12:52:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:27.468 12:52:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:27.468 12:52:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:27.468 12:52:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:27.468 12:52:00 -- scripts/common.sh@335 -- # IFS=.-: 00:24:27.468 12:52:00 -- scripts/common.sh@335 -- # read -ra ver1 00:24:27.468 12:52:00 -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.468 12:52:00 -- scripts/common.sh@336 -- # read -ra ver2 00:24:27.468 12:52:00 -- scripts/common.sh@337 -- # local 'op=<' 00:24:27.468 12:52:00 -- scripts/common.sh@339 -- # ver1_l=2 00:24:27.468 12:52:00 -- scripts/common.sh@340 -- # ver2_l=1 00:24:27.468 12:52:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:27.468 12:52:00 -- scripts/common.sh@343 -- # case "$op" in 00:24:27.468 12:52:00 -- scripts/common.sh@344 -- # : 1 00:24:27.468 12:52:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:27.468 12:52:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:27.468 12:52:00 -- scripts/common.sh@364 -- # decimal 1 00:24:27.468 12:52:00 -- scripts/common.sh@352 -- # local d=1 00:24:27.468 12:52:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.468 12:52:00 -- scripts/common.sh@354 -- # echo 1 00:24:27.468 12:52:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:27.468 12:52:00 -- scripts/common.sh@365 -- # decimal 2 00:24:27.468 12:52:00 -- scripts/common.sh@352 -- # local d=2 00:24:27.468 12:52:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:27.468 12:52:00 -- scripts/common.sh@354 -- # echo 2 00:24:27.468 12:52:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:27.468 12:52:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:27.468 12:52:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:27.468 12:52:00 -- scripts/common.sh@367 -- # return 0 00:24:27.468 12:52:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:27.468 12:52:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:27.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.468 --rc genhtml_branch_coverage=1 00:24:27.468 --rc genhtml_function_coverage=1 00:24:27.468 --rc genhtml_legend=1 00:24:27.468 --rc geninfo_all_blocks=1 00:24:27.468 --rc geninfo_unexecuted_blocks=1 00:24:27.468 00:24:27.468 ' 00:24:27.468 12:52:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:27.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.468 --rc genhtml_branch_coverage=1 00:24:27.468 --rc genhtml_function_coverage=1 00:24:27.468 --rc genhtml_legend=1 00:24:27.468 --rc geninfo_all_blocks=1 00:24:27.468 --rc geninfo_unexecuted_blocks=1 00:24:27.468 00:24:27.468 ' 00:24:27.468 12:52:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:27.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.468 --rc genhtml_branch_coverage=1 00:24:27.468 --rc genhtml_function_coverage=1 00:24:27.468 --rc genhtml_legend=1 00:24:27.468 --rc geninfo_all_blocks=1 00:24:27.468 --rc geninfo_unexecuted_blocks=1 00:24:27.468 00:24:27.468 ' 
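The lcov gate traced just above ("lt 1.15 2" ending in "return 0") comes from scripts/common.sh: the installed lcov version is split into fields and compared against 2. A rough reconstruction of that comparison, pieced together from the traced statements rather than copied from the script, shown only to make the trace readable:

    # Split dotted versions on . - : and compare field by field.
    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 ver1_l ver2_l lt=0 gt=0 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            (( ver1[v] > ver2[v] )) && gt=1 && break
            (( ver1[v] < ver2[v] )) && lt=1 && break
        done
        case "$op" in
            '<') (( lt == 1 )) ;;
            '>') (( gt == 1 )) ;;
            *)   (( gt == 0 && lt == 0 )) ;;
        esac
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "lcov older than 2"   # 1 < 2 on the first field, so this matches, as in the trace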
00:24:27.468 12:52:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:27.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.468 --rc genhtml_branch_coverage=1 00:24:27.468 --rc genhtml_function_coverage=1 00:24:27.468 --rc genhtml_legend=1 00:24:27.468 --rc geninfo_all_blocks=1 00:24:27.468 --rc geninfo_unexecuted_blocks=1 00:24:27.468 00:24:27.468 ' 00:24:27.468 12:52:00 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:27.468 12:52:00 -- nvmf/common.sh@7 -- # uname -s 00:24:27.468 12:52:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.468 12:52:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.468 12:52:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.468 12:52:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.468 12:52:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.468 12:52:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.468 12:52:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.468 12:52:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.468 12:52:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.468 12:52:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.468 12:52:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:27.468 12:52:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:27.468 12:52:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.468 12:52:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.468 12:52:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.468 12:52:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:27.468 12:52:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.468 12:52:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.468 12:52:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.468 12:52:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.468 12:52:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.468 12:52:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.468 12:52:00 -- paths/export.sh@5 -- # export PATH 00:24:27.468 12:52:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.468 12:52:00 -- nvmf/common.sh@46 -- # : 0 00:24:27.468 12:52:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:27.468 12:52:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:27.468 12:52:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:27.468 12:52:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.468 12:52:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.468 12:52:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:27.468 12:52:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:27.468 12:52:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:27.468 12:52:00 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:27.468 12:52:00 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:27.468 12:52:00 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:27.468 12:52:00 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:27.469 12:52:00 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:27.469 12:52:00 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:24:27.469 12:52:00 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:24:27.469 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
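multicontroller.sh bails out immediately on this transport, as the echo above and the exit 0 traced just below show. The guard, paraphrased as a sketch (the transport variable name is an assumption; the trace only shows the already-expanded comparison '[' rdma == rdma ']'):

    if [ "$TEST_TRANSPORT" == rdma ]; then
        echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
        exit 0
    fi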
00:24:27.469 12:52:00 -- host/multicontroller.sh@20 -- # exit 0 00:24:27.469 00:24:27.469 real 0m0.229s 00:24:27.469 user 0m0.141s 00:24:27.469 sys 0m0.102s 00:24:27.469 12:52:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:27.469 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.469 ************************************ 00:24:27.469 END TEST nvmf_multicontroller 00:24:27.469 ************************************ 00:24:27.469 12:52:00 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:24:27.469 12:52:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:27.469 12:52:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:27.469 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.469 ************************************ 00:24:27.469 START TEST nvmf_aer 00:24:27.469 ************************************ 00:24:27.469 12:52:00 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:24:27.729 * Looking for test storage... 00:24:27.729 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:27.729 12:52:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:27.729 12:52:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:27.729 12:52:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:27.729 12:52:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:27.729 12:52:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:27.729 12:52:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:27.729 12:52:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:27.729 12:52:00 -- scripts/common.sh@335 -- # IFS=.-: 00:24:27.729 12:52:00 -- scripts/common.sh@335 -- # read -ra ver1 00:24:27.729 12:52:00 -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.729 12:52:00 -- scripts/common.sh@336 -- # read -ra ver2 00:24:27.729 12:52:00 -- scripts/common.sh@337 -- # local 'op=<' 00:24:27.729 12:52:00 -- scripts/common.sh@339 -- # ver1_l=2 00:24:27.729 12:52:00 -- scripts/common.sh@340 -- # ver2_l=1 00:24:27.729 12:52:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:27.729 12:52:00 -- scripts/common.sh@343 -- # case "$op" in 00:24:27.729 12:52:00 -- scripts/common.sh@344 -- # : 1 00:24:27.729 12:52:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:27.729 12:52:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:27.729 12:52:00 -- scripts/common.sh@364 -- # decimal 1 00:24:27.729 12:52:00 -- scripts/common.sh@352 -- # local d=1 00:24:27.729 12:52:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.729 12:52:00 -- scripts/common.sh@354 -- # echo 1 00:24:27.729 12:52:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:27.729 12:52:00 -- scripts/common.sh@365 -- # decimal 2 00:24:27.729 12:52:00 -- scripts/common.sh@352 -- # local d=2 00:24:27.729 12:52:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:27.729 12:52:00 -- scripts/common.sh@354 -- # echo 2 00:24:27.729 12:52:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:27.729 12:52:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:27.729 12:52:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:27.729 12:52:00 -- scripts/common.sh@367 -- # return 0 00:24:27.730 12:52:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:27.730 12:52:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:27.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.730 --rc genhtml_branch_coverage=1 00:24:27.730 --rc genhtml_function_coverage=1 00:24:27.730 --rc genhtml_legend=1 00:24:27.730 --rc geninfo_all_blocks=1 00:24:27.730 --rc geninfo_unexecuted_blocks=1 00:24:27.730 00:24:27.730 ' 00:24:27.730 12:52:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:27.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.730 --rc genhtml_branch_coverage=1 00:24:27.730 --rc genhtml_function_coverage=1 00:24:27.730 --rc genhtml_legend=1 00:24:27.730 --rc geninfo_all_blocks=1 00:24:27.730 --rc geninfo_unexecuted_blocks=1 00:24:27.730 00:24:27.730 ' 00:24:27.730 12:52:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:27.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.730 --rc genhtml_branch_coverage=1 00:24:27.730 --rc genhtml_function_coverage=1 00:24:27.730 --rc genhtml_legend=1 00:24:27.730 --rc geninfo_all_blocks=1 00:24:27.730 --rc geninfo_unexecuted_blocks=1 00:24:27.730 00:24:27.730 ' 00:24:27.730 12:52:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:27.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.730 --rc genhtml_branch_coverage=1 00:24:27.730 --rc genhtml_function_coverage=1 00:24:27.730 --rc genhtml_legend=1 00:24:27.730 --rc geninfo_all_blocks=1 00:24:27.730 --rc geninfo_unexecuted_blocks=1 00:24:27.730 00:24:27.730 ' 00:24:27.730 12:52:00 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:27.730 12:52:00 -- nvmf/common.sh@7 -- # uname -s 00:24:27.730 12:52:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.730 12:52:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.730 12:52:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.730 12:52:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.730 12:52:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.730 12:52:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.730 12:52:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.730 12:52:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.730 12:52:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.730 12:52:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.730 12:52:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 
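The host identity that aer.sh (via nvmf/common.sh) is setting up around this point in the trace, with the HOSTID and NVME_HOST assignments following just below, amounts to: generate one host NQN and reuse its UUID suffix as the host ID for later nvme connect calls. A small illustrative sketch; the parameter expansion for the suffix is an assumption, not copied from the script:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep just the <uuid> part
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # later consumed roughly as: nvme connect -t rdma -a <target ip> -s 4420 "${NVME_HOST[@]}" -n <subnqn>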
00:24:27.730 12:52:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:27.730 12:52:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.730 12:52:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.730 12:52:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.730 12:52:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:27.730 12:52:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.730 12:52:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.730 12:52:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.730 12:52:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.730 12:52:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.730 12:52:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.730 12:52:00 -- paths/export.sh@5 -- # export PATH 00:24:27.730 12:52:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.730 12:52:00 -- nvmf/common.sh@46 -- # : 0 00:24:27.730 12:52:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:27.730 12:52:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:27.730 12:52:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:27.730 12:52:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.730 12:52:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.730 12:52:00 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:27.730 12:52:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:27.730 12:52:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:27.730 12:52:00 -- host/aer.sh@11 -- # nvmftestinit 00:24:27.730 12:52:00 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:27.730 12:52:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.730 12:52:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:27.730 12:52:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:27.730 12:52:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:27.730 12:52:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.730 12:52:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.730 12:52:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.730 12:52:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:27.730 12:52:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:27.730 12:52:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:27.730 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:35.876 12:52:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:35.876 12:52:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:35.876 12:52:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:35.876 12:52:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:35.876 12:52:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:35.876 12:52:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:35.876 12:52:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:35.876 12:52:07 -- nvmf/common.sh@294 -- # net_devs=() 00:24:35.876 12:52:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:35.876 12:52:07 -- nvmf/common.sh@295 -- # e810=() 00:24:35.877 12:52:07 -- nvmf/common.sh@295 -- # local -ga e810 00:24:35.877 12:52:07 -- nvmf/common.sh@296 -- # x722=() 00:24:35.877 12:52:07 -- nvmf/common.sh@296 -- # local -ga x722 00:24:35.877 12:52:07 -- nvmf/common.sh@297 -- # mlx=() 00:24:35.877 12:52:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:35.877 12:52:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.877 12:52:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.877 12:52:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.877 12:52:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.877 12:52:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.877 12:52:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.877 12:52:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.877 12:52:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.877 12:52:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.877 12:52:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.877 12:52:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.877 12:52:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:35.877 12:52:07 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:35.877 12:52:07 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:35.877 12:52:07 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:35.877 12:52:07 -- 
nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:35.877 12:52:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:35.877 12:52:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:24:35.877 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:24:35.877 12:52:07 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:35.877 12:52:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:35.877 12:52:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:24:35.877 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:24:35.877 12:52:07 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:35.877 12:52:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:35.877 12:52:07 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:35.877 12:52:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.877 12:52:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:35.877 12:52:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.877 12:52:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:24:35.877 Found net devices under 0000:98:00.0: mlx_0_0 00:24:35.877 12:52:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.877 12:52:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:35.877 12:52:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.877 12:52:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:35.877 12:52:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.877 12:52:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:24:35.877 Found net devices under 0000:98:00.1: mlx_0_1 00:24:35.877 12:52:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.877 12:52:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:35.877 12:52:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:35.877 12:52:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:35.877 12:52:07 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:35.877 12:52:07 -- nvmf/common.sh@57 -- # uname 00:24:35.877 12:52:07 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:35.877 12:52:07 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:35.877 12:52:07 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:35.877 12:52:07 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:35.877 12:52:07 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:35.877 12:52:07 -- nvmf/common.sh@65 -- # 
modprobe iw_cm 00:24:35.877 12:52:07 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:35.877 12:52:07 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:35.877 12:52:07 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:35.877 12:52:07 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:35.877 12:52:07 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:35.877 12:52:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:35.877 12:52:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:35.877 12:52:07 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:35.877 12:52:07 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:35.877 12:52:07 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:35.877 12:52:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:35.877 12:52:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:35.877 12:52:07 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:35.877 12:52:07 -- nvmf/common.sh@104 -- # continue 2 00:24:35.877 12:52:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:35.877 12:52:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:35.877 12:52:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:35.877 12:52:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:35.877 12:52:07 -- nvmf/common.sh@104 -- # continue 2 00:24:35.877 12:52:07 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:35.877 12:52:07 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:35.877 12:52:07 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:35.877 12:52:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:35.877 12:52:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:35.877 12:52:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:35.877 12:52:07 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:35.877 12:52:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:35.877 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:35.877 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:24:35.877 altname enp152s0f0np0 00:24:35.877 altname ens817f0np0 00:24:35.877 inet 192.168.100.8/24 scope global mlx_0_0 00:24:35.877 valid_lft forever preferred_lft forever 00:24:35.877 12:52:07 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:35.877 12:52:07 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:35.877 12:52:07 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:35.877 12:52:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:35.877 12:52:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:35.877 12:52:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:35.877 12:52:07 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:35.877 12:52:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:35.877 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:35.877 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:24:35.877 altname enp152s0f1np1 00:24:35.877 altname ens817f1np1 00:24:35.877 inet 192.168.100.9/24 scope global mlx_0_1 00:24:35.877 valid_lft 
forever preferred_lft forever 00:24:35.877 12:52:07 -- nvmf/common.sh@410 -- # return 0 00:24:35.877 12:52:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:35.877 12:52:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:35.877 12:52:07 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:35.877 12:52:07 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:35.877 12:52:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:35.877 12:52:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:35.877 12:52:07 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:35.877 12:52:07 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:35.877 12:52:07 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:35.877 12:52:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:35.877 12:52:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:35.877 12:52:07 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:35.877 12:52:07 -- nvmf/common.sh@104 -- # continue 2 00:24:35.877 12:52:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:35.877 12:52:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:35.877 12:52:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:35.877 12:52:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:35.877 12:52:07 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:35.877 12:52:07 -- nvmf/common.sh@104 -- # continue 2 00:24:35.877 12:52:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:35.877 12:52:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:35.877 12:52:07 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:35.877 12:52:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:35.877 12:52:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:35.877 12:52:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:35.877 12:52:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:35.877 12:52:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:35.877 12:52:07 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:35.877 12:52:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:35.877 12:52:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:35.878 12:52:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:35.878 12:52:07 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:35.878 192.168.100.9' 00:24:35.878 12:52:07 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:35.878 192.168.100.9' 00:24:35.878 12:52:07 -- nvmf/common.sh@445 -- # head -n 1 00:24:35.878 12:52:07 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:35.878 12:52:07 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:35.878 192.168.100.9' 00:24:35.878 12:52:07 -- nvmf/common.sh@446 -- # tail -n +2 00:24:35.878 12:52:07 -- nvmf/common.sh@446 -- # head -n 1 00:24:35.878 12:52:07 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:35.878 12:52:07 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:35.878 12:52:07 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:35.878 12:52:07 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:35.878 12:52:07 -- 
nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:35.878 12:52:07 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:35.878 12:52:07 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:35.878 12:52:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:35.878 12:52:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:35.878 12:52:07 -- common/autotest_common.sh@10 -- # set +x 00:24:35.878 12:52:07 -- nvmf/common.sh@469 -- # nvmfpid=617544 00:24:35.878 12:52:07 -- nvmf/common.sh@470 -- # waitforlisten 617544 00:24:35.878 12:52:07 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:35.878 12:52:07 -- common/autotest_common.sh@829 -- # '[' -z 617544 ']' 00:24:35.878 12:52:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.878 12:52:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.878 12:52:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.878 12:52:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.878 12:52:07 -- common/autotest_common.sh@10 -- # set +x 00:24:35.878 [2024-11-20 12:52:07.906648] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:35.878 [2024-11-20 12:52:07.906709] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.878 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.878 [2024-11-20 12:52:07.972633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:35.878 [2024-11-20 12:52:08.044705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:35.878 [2024-11-20 12:52:08.044841] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.878 [2024-11-20 12:52:08.044851] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.878 [2024-11-20 12:52:08.044860] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
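The trace above is nvmftestinit for the aer test: it detects the two mlx5 ports (0x15b3:0x1015), loads the RDMA kernel modules, and parses the interface addresses into NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP. A minimal standalone sketch of that address-parsing step, assuming the same mlx_0_0 / mlx_0_1 interface names shown in the trace (illustrative, not a verbatim excerpt of nvmf/common.sh):

  get_ip_address() {
      local interface=$1
      # Same pipeline as in the trace: take the addr/prefix column, drop the prefix length.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run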
00:24:35.878 [2024-11-20 12:52:08.045036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.878 [2024-11-20 12:52:08.045310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:35.878 [2024-11-20 12:52:08.045467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:35.878 [2024-11-20 12:52:08.045467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.878 12:52:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:35.878 12:52:08 -- common/autotest_common.sh@862 -- # return 0 00:24:35.878 12:52:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:35.878 12:52:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:35.878 12:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.878 12:52:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.878 12:52:08 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:35.878 12:52:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.878 12:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.878 [2024-11-20 12:52:08.779950] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22da7f0/0x22dece0) succeed. 00:24:35.878 [2024-11-20 12:52:08.794783] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22dbde0/0x2320380) succeed. 00:24:35.878 12:52:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.878 12:52:08 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:35.878 12:52:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.878 12:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.878 Malloc0 00:24:35.878 12:52:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.878 12:52:08 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:35.878 12:52:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.878 12:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.878 12:52:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.878 12:52:08 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:35.878 12:52:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.878 12:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.878 12:52:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.878 12:52:08 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:35.878 12:52:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.878 12:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.878 [2024-11-20 12:52:08.970738] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:35.878 12:52:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.878 12:52:08 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:35.878 12:52:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.878 12:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:36.140 [2024-11-20 12:52:08.982320] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:36.140 [ 00:24:36.140 { 00:24:36.140 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:36.140 "subtype": 
"Discovery", 00:24:36.140 "listen_addresses": [], 00:24:36.140 "allow_any_host": true, 00:24:36.140 "hosts": [] 00:24:36.140 }, 00:24:36.140 { 00:24:36.140 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.140 "subtype": "NVMe", 00:24:36.140 "listen_addresses": [ 00:24:36.140 { 00:24:36.140 "transport": "RDMA", 00:24:36.140 "trtype": "RDMA", 00:24:36.140 "adrfam": "IPv4", 00:24:36.140 "traddr": "192.168.100.8", 00:24:36.140 "trsvcid": "4420" 00:24:36.140 } 00:24:36.140 ], 00:24:36.140 "allow_any_host": true, 00:24:36.140 "hosts": [], 00:24:36.140 "serial_number": "SPDK00000000000001", 00:24:36.140 "model_number": "SPDK bdev Controller", 00:24:36.140 "max_namespaces": 2, 00:24:36.140 "min_cntlid": 1, 00:24:36.140 "max_cntlid": 65519, 00:24:36.140 "namespaces": [ 00:24:36.140 { 00:24:36.140 "nsid": 1, 00:24:36.140 "bdev_name": "Malloc0", 00:24:36.140 "name": "Malloc0", 00:24:36.140 "nguid": "1C96F993840E4E4FBFBA8CF272762794", 00:24:36.140 "uuid": "1c96f993-840e-4e4f-bfba-8cf272762794" 00:24:36.140 } 00:24:36.140 ] 00:24:36.140 } 00:24:36.140 ] 00:24:36.140 12:52:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.140 12:52:08 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:36.140 12:52:08 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:36.140 12:52:08 -- host/aer.sh@33 -- # aerpid=617796 00:24:36.140 12:52:08 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:36.140 12:52:08 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:36.140 12:52:08 -- common/autotest_common.sh@1254 -- # local i=0 00:24:36.140 12:52:08 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:36.140 12:52:08 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:24:36.140 12:52:08 -- common/autotest_common.sh@1257 -- # i=1 00:24:36.140 12:52:08 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:24:36.140 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.140 12:52:09 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:36.140 12:52:09 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:24:36.140 12:52:09 -- common/autotest_common.sh@1257 -- # i=2 00:24:36.140 12:52:09 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:24:36.140 12:52:09 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:36.140 12:52:09 -- common/autotest_common.sh@1261 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:36.140 12:52:09 -- common/autotest_common.sh@1265 -- # return 0 00:24:36.140 12:52:09 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:36.140 12:52:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.140 12:52:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.140 Malloc1 00:24:36.140 12:52:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.140 12:52:09 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:36.140 12:52:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.140 12:52:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.402 12:52:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.402 12:52:09 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:36.402 12:52:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.402 12:52:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.402 [ 00:24:36.402 { 00:24:36.402 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:36.402 "subtype": "Discovery", 00:24:36.402 "listen_addresses": [], 00:24:36.402 "allow_any_host": true, 00:24:36.402 "hosts": [] 00:24:36.402 }, 00:24:36.402 { 00:24:36.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.402 "subtype": "NVMe", 00:24:36.403 "listen_addresses": [ 00:24:36.403 { 00:24:36.403 "transport": "RDMA", 00:24:36.403 "trtype": "RDMA", 00:24:36.403 "adrfam": "IPv4", 00:24:36.403 "traddr": "192.168.100.8", 00:24:36.403 "trsvcid": "4420" 00:24:36.403 } 00:24:36.403 ], 00:24:36.403 "allow_any_host": true, 00:24:36.403 "hosts": [], 00:24:36.403 "serial_number": "SPDK00000000000001", 00:24:36.403 "model_number": "SPDK bdev Controller", 00:24:36.403 "max_namespaces": 2, 00:24:36.403 "min_cntlid": 1, 00:24:36.403 "max_cntlid": 65519, 00:24:36.403 "namespaces": [ 00:24:36.403 { 00:24:36.403 "nsid": 1, 00:24:36.403 "bdev_name": "Malloc0", 00:24:36.403 "name": "Malloc0", 00:24:36.403 "nguid": "1C96F993840E4E4FBFBA8CF272762794", 00:24:36.403 "uuid": "1c96f993-840e-4e4f-bfba-8cf272762794" 00:24:36.403 }, 00:24:36.403 { 00:24:36.403 "nsid": 2, 00:24:36.403 "bdev_name": "Malloc1", 00:24:36.403 "name": "Malloc1", 00:24:36.403 "nguid": "03C8B90ABDFF4B69B1D4ABF608CE56A9", 00:24:36.403 "uuid": "03c8b90a-bdff-4b69-b1d4-abf608ce56a9" 00:24:36.403 } 00:24:36.403 ] 00:24:36.403 } 00:24:36.403 ] 00:24:36.403 12:52:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.403 12:52:09 -- host/aer.sh@43 -- # wait 617796 00:24:36.403 Asynchronous Event Request test 00:24:36.403 Attaching to 192.168.100.8 00:24:36.403 Attached to 192.168.100.8 00:24:36.403 Registering asynchronous event callbacks... 00:24:36.403 Starting namespace attribute notice tests for all controllers... 00:24:36.403 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:36.403 aer_cb - Changed Namespace 00:24:36.403 Cleaning up... 
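At this point host/aer.sh has built the target side over RPC and the aer tool has reported the namespace-attribute-changed event (log page 4) that was raised when Malloc1 was added as namespace 2. A condensed sketch of the same RPC sequence using scripts/rpc.py, which the rpc_cmd wrapper in the trace ultimately invokes (flags copied from the trace; illustrative only):

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # (the test/nvme/aer/aer tool connects here and registers its AER callback)
  # Adding a second namespace while the tool is connected is what triggers the
  # "namespace attribute changed" asynchronous event seen above.
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2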
00:24:36.403 12:52:09 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:36.403 12:52:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.403 12:52:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.403 12:52:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.403 12:52:09 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:36.403 12:52:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.403 12:52:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.403 12:52:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.403 12:52:09 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:36.403 12:52:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.403 12:52:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.403 12:52:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.403 12:52:09 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:36.403 12:52:09 -- host/aer.sh@51 -- # nvmftestfini 00:24:36.403 12:52:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:36.403 12:52:09 -- nvmf/common.sh@116 -- # sync 00:24:36.403 12:52:09 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:36.403 12:52:09 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:36.403 12:52:09 -- nvmf/common.sh@119 -- # set +e 00:24:36.403 12:52:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:36.403 12:52:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:36.403 rmmod nvme_rdma 00:24:36.403 rmmod nvme_fabrics 00:24:36.403 12:52:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:36.403 12:52:09 -- nvmf/common.sh@123 -- # set -e 00:24:36.403 12:52:09 -- nvmf/common.sh@124 -- # return 0 00:24:36.403 12:52:09 -- nvmf/common.sh@477 -- # '[' -n 617544 ']' 00:24:36.403 12:52:09 -- nvmf/common.sh@478 -- # killprocess 617544 00:24:36.403 12:52:09 -- common/autotest_common.sh@936 -- # '[' -z 617544 ']' 00:24:36.403 12:52:09 -- common/autotest_common.sh@940 -- # kill -0 617544 00:24:36.403 12:52:09 -- common/autotest_common.sh@941 -- # uname 00:24:36.403 12:52:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:36.403 12:52:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 617544 00:24:36.403 12:52:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:36.403 12:52:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:36.403 12:52:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 617544' 00:24:36.403 killing process with pid 617544 00:24:36.403 12:52:09 -- common/autotest_common.sh@955 -- # kill 617544 00:24:36.403 [2024-11-20 12:52:09.468681] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:36.403 12:52:09 -- common/autotest_common.sh@960 -- # wait 617544 00:24:36.665 12:52:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:36.665 12:52:09 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:36.665 00:24:36.665 real 0m9.195s 00:24:36.665 user 0m8.739s 00:24:36.665 sys 0m5.718s 00:24:36.665 12:52:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:36.665 12:52:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.665 ************************************ 00:24:36.665 END TEST nvmf_aer 00:24:36.665 ************************************ 00:24:36.665 12:52:09 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:24:36.665 12:52:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:36.665 12:52:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:36.665 12:52:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.665 ************************************ 00:24:36.665 START TEST nvmf_async_init 00:24:36.665 ************************************ 00:24:36.665 12:52:09 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:24:36.927 * Looking for test storage... 00:24:36.927 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:36.927 12:52:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:36.927 12:52:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:36.927 12:52:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:36.927 12:52:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:36.927 12:52:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:36.927 12:52:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:36.927 12:52:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:36.927 12:52:09 -- scripts/common.sh@335 -- # IFS=.-: 00:24:36.927 12:52:09 -- scripts/common.sh@335 -- # read -ra ver1 00:24:36.927 12:52:09 -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.927 12:52:09 -- scripts/common.sh@336 -- # read -ra ver2 00:24:36.927 12:52:09 -- scripts/common.sh@337 -- # local 'op=<' 00:24:36.927 12:52:09 -- scripts/common.sh@339 -- # ver1_l=2 00:24:36.927 12:52:09 -- scripts/common.sh@340 -- # ver2_l=1 00:24:36.927 12:52:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:36.927 12:52:09 -- scripts/common.sh@343 -- # case "$op" in 00:24:36.927 12:52:09 -- scripts/common.sh@344 -- # : 1 00:24:36.927 12:52:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:36.927 12:52:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:36.927 12:52:09 -- scripts/common.sh@364 -- # decimal 1 00:24:36.927 12:52:09 -- scripts/common.sh@352 -- # local d=1 00:24:36.927 12:52:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.927 12:52:09 -- scripts/common.sh@354 -- # echo 1 00:24:36.927 12:52:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:36.927 12:52:09 -- scripts/common.sh@365 -- # decimal 2 00:24:36.927 12:52:09 -- scripts/common.sh@352 -- # local d=2 00:24:36.927 12:52:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.927 12:52:09 -- scripts/common.sh@354 -- # echo 2 00:24:36.927 12:52:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:36.927 12:52:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:36.927 12:52:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:36.927 12:52:09 -- scripts/common.sh@367 -- # return 0 00:24:36.927 12:52:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.927 12:52:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:36.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.927 --rc genhtml_branch_coverage=1 00:24:36.927 --rc genhtml_function_coverage=1 00:24:36.927 --rc genhtml_legend=1 00:24:36.927 --rc geninfo_all_blocks=1 00:24:36.927 --rc geninfo_unexecuted_blocks=1 00:24:36.927 00:24:36.927 ' 00:24:36.927 12:52:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:36.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.927 --rc genhtml_branch_coverage=1 00:24:36.927 --rc genhtml_function_coverage=1 00:24:36.927 --rc genhtml_legend=1 00:24:36.927 --rc geninfo_all_blocks=1 00:24:36.927 --rc geninfo_unexecuted_blocks=1 00:24:36.927 00:24:36.927 ' 00:24:36.927 12:52:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:36.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.927 --rc genhtml_branch_coverage=1 00:24:36.927 --rc genhtml_function_coverage=1 00:24:36.927 --rc genhtml_legend=1 00:24:36.927 --rc geninfo_all_blocks=1 00:24:36.927 --rc geninfo_unexecuted_blocks=1 00:24:36.927 00:24:36.927 ' 00:24:36.927 12:52:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:36.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.927 --rc genhtml_branch_coverage=1 00:24:36.927 --rc genhtml_function_coverage=1 00:24:36.927 --rc genhtml_legend=1 00:24:36.927 --rc geninfo_all_blocks=1 00:24:36.927 --rc geninfo_unexecuted_blocks=1 00:24:36.927 00:24:36.927 ' 00:24:36.927 12:52:09 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.927 12:52:09 -- nvmf/common.sh@7 -- # uname -s 00:24:36.927 12:52:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.927 12:52:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.927 12:52:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.927 12:52:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.927 12:52:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.927 12:52:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.927 12:52:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.927 12:52:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.927 12:52:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.927 12:52:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.927 12:52:09 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:36.927 12:52:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:36.927 12:52:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.927 12:52:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.927 12:52:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.927 12:52:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:36.927 12:52:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.927 12:52:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.927 12:52:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.928 12:52:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.928 12:52:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.928 12:52:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.928 12:52:09 -- paths/export.sh@5 -- # export PATH 00:24:36.928 12:52:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.928 12:52:09 -- nvmf/common.sh@46 -- # : 0 00:24:36.928 12:52:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:36.928 12:52:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:36.928 12:52:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:36.928 12:52:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.928 12:52:09 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.928 12:52:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:36.928 12:52:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:36.928 12:52:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:36.928 12:52:09 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:36.928 12:52:09 -- host/async_init.sh@14 -- # null_block_size=512 00:24:36.928 12:52:09 -- host/async_init.sh@15 -- # null_bdev=null0 00:24:36.928 12:52:09 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:36.928 12:52:09 -- host/async_init.sh@20 -- # uuidgen 00:24:36.928 12:52:09 -- host/async_init.sh@20 -- # tr -d - 00:24:36.928 12:52:09 -- host/async_init.sh@20 -- # nguid=4cddb5a2b65141fbb6d451c07c729763 00:24:36.928 12:52:09 -- host/async_init.sh@22 -- # nvmftestinit 00:24:36.928 12:52:09 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:36.928 12:52:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.928 12:52:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:36.928 12:52:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:36.928 12:52:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:36.928 12:52:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.928 12:52:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.928 12:52:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.928 12:52:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:36.928 12:52:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:36.928 12:52:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:36.928 12:52:09 -- common/autotest_common.sh@10 -- # set +x 00:24:45.078 12:52:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:45.078 12:52:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:45.078 12:52:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:45.078 12:52:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:45.078 12:52:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:45.078 12:52:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:45.079 12:52:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:45.079 12:52:16 -- nvmf/common.sh@294 -- # net_devs=() 00:24:45.079 12:52:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:45.079 12:52:16 -- nvmf/common.sh@295 -- # e810=() 00:24:45.079 12:52:16 -- nvmf/common.sh@295 -- # local -ga e810 00:24:45.079 12:52:16 -- nvmf/common.sh@296 -- # x722=() 00:24:45.079 12:52:16 -- nvmf/common.sh@296 -- # local -ga x722 00:24:45.079 12:52:16 -- nvmf/common.sh@297 -- # mlx=() 00:24:45.079 12:52:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:45.079 12:52:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.079 12:52:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.079 12:52:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.079 12:52:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.079 12:52:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.079 12:52:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.079 12:52:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.079 12:52:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.079 12:52:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.079 12:52:16 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.079 12:52:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.079 12:52:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:45.079 12:52:16 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:45.079 12:52:16 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:45.079 12:52:16 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:45.079 12:52:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:45.079 12:52:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:45.079 12:52:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:24:45.079 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:24:45.079 12:52:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:45.079 12:52:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:45.079 12:52:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:24:45.079 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:24:45.079 12:52:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:45.079 12:52:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:45.079 12:52:16 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:45.079 12:52:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.079 12:52:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:45.079 12:52:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.079 12:52:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:24:45.079 Found net devices under 0000:98:00.0: mlx_0_0 00:24:45.079 12:52:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.079 12:52:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:45.079 12:52:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.079 12:52:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:45.079 12:52:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.079 12:52:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:24:45.079 Found net devices under 0000:98:00.1: mlx_0_1 00:24:45.079 12:52:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.079 12:52:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:45.079 12:52:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:45.079 12:52:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@407 -- 
# [[ rdma == rdma ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:45.079 12:52:16 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:45.079 12:52:16 -- nvmf/common.sh@57 -- # uname 00:24:45.079 12:52:16 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:45.079 12:52:16 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:45.079 12:52:16 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:45.079 12:52:16 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:45.079 12:52:16 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:45.079 12:52:16 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:45.079 12:52:16 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:45.079 12:52:16 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:45.079 12:52:16 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:45.079 12:52:16 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:45.079 12:52:16 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:45.079 12:52:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:45.079 12:52:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:45.079 12:52:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:45.079 12:52:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:45.079 12:52:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:45.079 12:52:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:45.079 12:52:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:45.079 12:52:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:45.079 12:52:16 -- nvmf/common.sh@104 -- # continue 2 00:24:45.079 12:52:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:45.079 12:52:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:45.079 12:52:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:45.079 12:52:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:45.079 12:52:16 -- nvmf/common.sh@104 -- # continue 2 00:24:45.079 12:52:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:45.079 12:52:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:45.079 12:52:16 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:45.079 12:52:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:45.079 12:52:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:45.079 12:52:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:45.079 12:52:16 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:45.079 12:52:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:45.079 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:45.079 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:24:45.079 altname enp152s0f0np0 00:24:45.079 altname ens817f0np0 00:24:45.079 inet 192.168.100.8/24 scope global mlx_0_0 00:24:45.079 valid_lft forever preferred_lft forever 00:24:45.079 12:52:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:45.079 12:52:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:45.079 12:52:16 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:45.079 12:52:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:45.079 12:52:16 -- 
nvmf/common.sh@112 -- # awk '{print $4}' 00:24:45.079 12:52:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:45.079 12:52:16 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:45.079 12:52:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:45.079 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:45.079 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:24:45.079 altname enp152s0f1np1 00:24:45.079 altname ens817f1np1 00:24:45.079 inet 192.168.100.9/24 scope global mlx_0_1 00:24:45.079 valid_lft forever preferred_lft forever 00:24:45.079 12:52:16 -- nvmf/common.sh@410 -- # return 0 00:24:45.079 12:52:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:45.079 12:52:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:45.079 12:52:16 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:45.079 12:52:16 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:45.079 12:52:17 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:45.079 12:52:17 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:45.079 12:52:17 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:45.079 12:52:17 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:45.079 12:52:17 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:45.079 12:52:17 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:45.079 12:52:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:45.079 12:52:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:45.079 12:52:17 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:45.079 12:52:17 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:45.079 12:52:17 -- nvmf/common.sh@104 -- # continue 2 00:24:45.079 12:52:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:45.079 12:52:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:45.079 12:52:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:45.079 12:52:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:45.079 12:52:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:45.079 12:52:17 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:45.079 12:52:17 -- nvmf/common.sh@104 -- # continue 2 00:24:45.079 12:52:17 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:45.079 12:52:17 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:45.079 12:52:17 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:45.079 12:52:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:45.079 12:52:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:45.079 12:52:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:45.079 12:52:17 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:45.080 12:52:17 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:45.080 12:52:17 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:45.080 12:52:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:45.080 12:52:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:45.080 12:52:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:45.080 12:52:17 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:45.080 192.168.100.9' 00:24:45.080 12:52:17 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:45.080 192.168.100.9' 00:24:45.080 12:52:17 -- nvmf/common.sh@445 -- # head -n 1 00:24:45.080 12:52:17 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:45.080 12:52:17 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:45.080 192.168.100.9' 00:24:45.080 12:52:17 -- nvmf/common.sh@446 -- # tail -n +2 00:24:45.080 12:52:17 -- nvmf/common.sh@446 -- # head -n 1 00:24:45.080 12:52:17 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:45.080 12:52:17 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:45.080 12:52:17 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:45.080 12:52:17 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:45.080 12:52:17 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:45.080 12:52:17 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:45.080 12:52:17 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:45.080 12:52:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:45.080 12:52:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:45.080 12:52:17 -- common/autotest_common.sh@10 -- # set +x 00:24:45.080 12:52:17 -- nvmf/common.sh@469 -- # nvmfpid=621620 00:24:45.080 12:52:17 -- nvmf/common.sh@470 -- # waitforlisten 621620 00:24:45.080 12:52:17 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:45.080 12:52:17 -- common/autotest_common.sh@829 -- # '[' -z 621620 ']' 00:24:45.080 12:52:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.080 12:52:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.080 12:52:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.080 12:52:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.080 12:52:17 -- common/autotest_common.sh@10 -- # set +x 00:24:45.080 [2024-11-20 12:52:17.144358] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:45.080 [2024-11-20 12:52:17.144423] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.080 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.080 [2024-11-20 12:52:17.208827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.080 [2024-11-20 12:52:17.280262] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:45.080 [2024-11-20 12:52:17.280382] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.080 [2024-11-20 12:52:17.280390] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.080 [2024-11-20 12:52:17.280398] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
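The async_init test then starts its own nvmf_tgt instance (core mask 0x1, pid 621620) and blocks until the RPC socket is up. A rough sketch of that start-and-wait pattern, assuming waitforlisten polls the RPC socket with rpc_get_methods (an assumption about the helper's internals; the exact loop lives in autotest_common.sh):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # Assumed polling loop: retry until the app answers on /var/tmp/spdk.sock.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done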
00:24:45.080 [2024-11-20 12:52:17.280417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.080 12:52:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:45.080 12:52:17 -- common/autotest_common.sh@862 -- # return 0 00:24:45.080 12:52:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:45.080 12:52:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:45.080 12:52:17 -- common/autotest_common.sh@10 -- # set +x 00:24:45.080 12:52:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.080 12:52:17 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:45.080 12:52:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.080 12:52:17 -- common/autotest_common.sh@10 -- # set +x 00:24:45.080 [2024-11-20 12:52:17.992800] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e2c650/0x1e30b40) succeed. 00:24:45.080 [2024-11-20 12:52:18.006025] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e2db50/0x1e721e0) succeed. 00:24:45.080 12:52:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.080 12:52:18 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:45.080 12:52:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.080 12:52:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.080 null0 00:24:45.080 12:52:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.080 12:52:18 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:45.080 12:52:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.080 12:52:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.080 12:52:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.080 12:52:18 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:45.080 12:52:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.080 12:52:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.080 12:52:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.080 12:52:18 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4cddb5a2b65141fbb6d451c07c729763 00:24:45.080 12:52:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.080 12:52:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.080 12:52:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.080 12:52:18 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:45.080 12:52:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.080 12:52:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.080 [2024-11-20 12:52:18.107291] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:45.080 12:52:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.080 12:52:18 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:45.080 12:52:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.080 12:52:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.341 nvme0n1 00:24:45.341 12:52:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.341 12:52:18 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:45.341 12:52:18 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.341 12:52:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.341 [ 00:24:45.341 { 00:24:45.341 "name": "nvme0n1", 00:24:45.341 "aliases": [ 00:24:45.341 "4cddb5a2-b651-41fb-b6d4-51c07c729763" 00:24:45.341 ], 00:24:45.341 "product_name": "NVMe disk", 00:24:45.341 "block_size": 512, 00:24:45.341 "num_blocks": 2097152, 00:24:45.341 "uuid": "4cddb5a2-b651-41fb-b6d4-51c07c729763", 00:24:45.341 "assigned_rate_limits": { 00:24:45.341 "rw_ios_per_sec": 0, 00:24:45.341 "rw_mbytes_per_sec": 0, 00:24:45.341 "r_mbytes_per_sec": 0, 00:24:45.341 "w_mbytes_per_sec": 0 00:24:45.341 }, 00:24:45.341 "claimed": false, 00:24:45.341 "zoned": false, 00:24:45.341 "supported_io_types": { 00:24:45.341 "read": true, 00:24:45.341 "write": true, 00:24:45.341 "unmap": false, 00:24:45.341 "write_zeroes": true, 00:24:45.341 "flush": true, 00:24:45.341 "reset": true, 00:24:45.341 "compare": true, 00:24:45.341 "compare_and_write": true, 00:24:45.341 "abort": true, 00:24:45.341 "nvme_admin": true, 00:24:45.341 "nvme_io": true 00:24:45.341 }, 00:24:45.341 "memory_domains": [ 00:24:45.341 { 00:24:45.341 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:45.341 "dma_device_type": 0 00:24:45.341 } 00:24:45.341 ], 00:24:45.341 "driver_specific": { 00:24:45.341 "nvme": [ 00:24:45.341 { 00:24:45.341 "trid": { 00:24:45.341 "trtype": "RDMA", 00:24:45.341 "adrfam": "IPv4", 00:24:45.341 "traddr": "192.168.100.8", 00:24:45.341 "trsvcid": "4420", 00:24:45.341 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:45.341 }, 00:24:45.341 "ctrlr_data": { 00:24:45.341 "cntlid": 1, 00:24:45.341 "vendor_id": "0x8086", 00:24:45.341 "model_number": "SPDK bdev Controller", 00:24:45.341 "serial_number": "00000000000000000000", 00:24:45.341 "firmware_revision": "24.01.1", 00:24:45.341 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:45.341 "oacs": { 00:24:45.341 "security": 0, 00:24:45.341 "format": 0, 00:24:45.341 "firmware": 0, 00:24:45.341 "ns_manage": 0 00:24:45.341 }, 00:24:45.341 "multi_ctrlr": true, 00:24:45.341 "ana_reporting": false 00:24:45.341 }, 00:24:45.341 "vs": { 00:24:45.341 "nvme_version": "1.3" 00:24:45.341 }, 00:24:45.341 "ns_data": { 00:24:45.341 "id": 1, 00:24:45.341 "can_share": true 00:24:45.341 } 00:24:45.341 } 00:24:45.341 ], 00:24:45.341 "mp_policy": "active_passive" 00:24:45.341 } 00:24:45.341 } 00:24:45.341 ] 00:24:45.341 12:52:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.341 12:52:18 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:45.341 12:52:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.341 12:52:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.341 [2024-11-20 12:52:18.235213] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:45.341 [2024-11-20 12:52:18.266205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:45.341 [2024-11-20 12:52:18.293726] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
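The reset above tears down and re-establishes the RDMA connection to cnode0; the bdev_get_bdevs dump that follows shows the same namespace UUID but a new controller ID (cntlid 2 instead of 1). The equivalent two RPC calls, sketched with scripts/rpc.py (illustrative):

  ./scripts/rpc.py bdev_nvme_reset_controller nvme0
  # After the reconnect completes, inspect the bdev again; in this run cntlid moved from 1 to 2.
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1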
00:24:45.341 12:52:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.341 12:52:18 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:45.341 12:52:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.341 12:52:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.341 [ 00:24:45.341 { 00:24:45.341 "name": "nvme0n1", 00:24:45.341 "aliases": [ 00:24:45.341 "4cddb5a2-b651-41fb-b6d4-51c07c729763" 00:24:45.341 ], 00:24:45.341 "product_name": "NVMe disk", 00:24:45.341 "block_size": 512, 00:24:45.341 "num_blocks": 2097152, 00:24:45.341 "uuid": "4cddb5a2-b651-41fb-b6d4-51c07c729763", 00:24:45.342 "assigned_rate_limits": { 00:24:45.342 "rw_ios_per_sec": 0, 00:24:45.342 "rw_mbytes_per_sec": 0, 00:24:45.342 "r_mbytes_per_sec": 0, 00:24:45.342 "w_mbytes_per_sec": 0 00:24:45.342 }, 00:24:45.342 "claimed": false, 00:24:45.342 "zoned": false, 00:24:45.342 "supported_io_types": { 00:24:45.342 "read": true, 00:24:45.342 "write": true, 00:24:45.342 "unmap": false, 00:24:45.342 "write_zeroes": true, 00:24:45.342 "flush": true, 00:24:45.342 "reset": true, 00:24:45.342 "compare": true, 00:24:45.342 "compare_and_write": true, 00:24:45.342 "abort": true, 00:24:45.342 "nvme_admin": true, 00:24:45.342 "nvme_io": true 00:24:45.342 }, 00:24:45.342 "memory_domains": [ 00:24:45.342 { 00:24:45.342 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:45.342 "dma_device_type": 0 00:24:45.342 } 00:24:45.342 ], 00:24:45.342 "driver_specific": { 00:24:45.342 "nvme": [ 00:24:45.342 { 00:24:45.342 "trid": { 00:24:45.342 "trtype": "RDMA", 00:24:45.342 "adrfam": "IPv4", 00:24:45.342 "traddr": "192.168.100.8", 00:24:45.342 "trsvcid": "4420", 00:24:45.342 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:45.342 }, 00:24:45.342 "ctrlr_data": { 00:24:45.342 "cntlid": 2, 00:24:45.342 "vendor_id": "0x8086", 00:24:45.342 "model_number": "SPDK bdev Controller", 00:24:45.342 "serial_number": "00000000000000000000", 00:24:45.342 "firmware_revision": "24.01.1", 00:24:45.342 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:45.342 "oacs": { 00:24:45.342 "security": 0, 00:24:45.342 "format": 0, 00:24:45.342 "firmware": 0, 00:24:45.342 "ns_manage": 0 00:24:45.342 }, 00:24:45.342 "multi_ctrlr": true, 00:24:45.342 "ana_reporting": false 00:24:45.342 }, 00:24:45.342 "vs": { 00:24:45.342 "nvme_version": "1.3" 00:24:45.342 }, 00:24:45.342 "ns_data": { 00:24:45.342 "id": 1, 00:24:45.342 "can_share": true 00:24:45.342 } 00:24:45.342 } 00:24:45.342 ], 00:24:45.342 "mp_policy": "active_passive" 00:24:45.342 } 00:24:45.342 } 00:24:45.342 ] 00:24:45.342 12:52:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.342 12:52:18 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.342 12:52:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.342 12:52:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.342 12:52:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.342 12:52:18 -- host/async_init.sh@53 -- # mktemp 00:24:45.342 12:52:18 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.4VHszmUXG7 00:24:45.342 12:52:18 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:45.342 12:52:18 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.4VHszmUXG7 00:24:45.342 12:52:18 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:45.342 12:52:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.342 12:52:18 -- common/autotest_common.sh@10 -- # set +x 
00:24:45.342 12:52:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.342 12:52:18 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:24:45.342 12:52:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.342 12:52:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.342 [2024-11-20 12:52:18.378911] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:45.342 12:52:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.342 12:52:18 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4VHszmUXG7 00:24:45.342 12:52:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.342 12:52:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.342 12:52:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.342 12:52:18 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4VHszmUXG7 00:24:45.342 12:52:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.342 12:52:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.342 [2024-11-20 12:52:18.402957] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:45.604 nvme0n1 00:24:45.604 12:52:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.604 12:52:18 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:45.604 12:52:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.604 12:52:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.604 [ 00:24:45.604 { 00:24:45.604 "name": "nvme0n1", 00:24:45.604 "aliases": [ 00:24:45.604 "4cddb5a2-b651-41fb-b6d4-51c07c729763" 00:24:45.604 ], 00:24:45.604 "product_name": "NVMe disk", 00:24:45.604 "block_size": 512, 00:24:45.604 "num_blocks": 2097152, 00:24:45.604 "uuid": "4cddb5a2-b651-41fb-b6d4-51c07c729763", 00:24:45.604 "assigned_rate_limits": { 00:24:45.604 "rw_ios_per_sec": 0, 00:24:45.604 "rw_mbytes_per_sec": 0, 00:24:45.604 "r_mbytes_per_sec": 0, 00:24:45.604 "w_mbytes_per_sec": 0 00:24:45.604 }, 00:24:45.604 "claimed": false, 00:24:45.604 "zoned": false, 00:24:45.604 "supported_io_types": { 00:24:45.604 "read": true, 00:24:45.604 "write": true, 00:24:45.604 "unmap": false, 00:24:45.604 "write_zeroes": true, 00:24:45.604 "flush": true, 00:24:45.604 "reset": true, 00:24:45.604 "compare": true, 00:24:45.604 "compare_and_write": true, 00:24:45.604 "abort": true, 00:24:45.604 "nvme_admin": true, 00:24:45.604 "nvme_io": true 00:24:45.604 }, 00:24:45.604 "memory_domains": [ 00:24:45.604 { 00:24:45.604 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:45.604 "dma_device_type": 0 00:24:45.604 } 00:24:45.604 ], 00:24:45.604 "driver_specific": { 00:24:45.604 "nvme": [ 00:24:45.604 { 00:24:45.604 "trid": { 00:24:45.604 "trtype": "RDMA", 00:24:45.604 "adrfam": "IPv4", 00:24:45.604 "traddr": "192.168.100.8", 00:24:45.604 "trsvcid": "4421", 00:24:45.604 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:45.604 }, 00:24:45.604 "ctrlr_data": { 00:24:45.604 "cntlid": 3, 00:24:45.604 "vendor_id": "0x8086", 00:24:45.604 "model_number": "SPDK bdev Controller", 00:24:45.604 "serial_number": "00000000000000000000", 00:24:45.604 "firmware_revision": "24.01.1", 00:24:45.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:45.604 
"oacs": { 00:24:45.604 "security": 0, 00:24:45.604 "format": 0, 00:24:45.604 "firmware": 0, 00:24:45.604 "ns_manage": 0 00:24:45.604 }, 00:24:45.604 "multi_ctrlr": true, 00:24:45.604 "ana_reporting": false 00:24:45.604 }, 00:24:45.604 "vs": { 00:24:45.604 "nvme_version": "1.3" 00:24:45.604 }, 00:24:45.604 "ns_data": { 00:24:45.604 "id": 1, 00:24:45.604 "can_share": true 00:24:45.604 } 00:24:45.604 } 00:24:45.604 ], 00:24:45.604 "mp_policy": "active_passive" 00:24:45.604 } 00:24:45.604 } 00:24:45.604 ] 00:24:45.604 12:52:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.604 12:52:18 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.604 12:52:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.604 12:52:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.604 12:52:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.604 12:52:18 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.4VHszmUXG7 00:24:45.604 12:52:18 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:45.604 12:52:18 -- host/async_init.sh@78 -- # nvmftestfini 00:24:45.604 12:52:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:45.604 12:52:18 -- nvmf/common.sh@116 -- # sync 00:24:45.604 12:52:18 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:45.604 12:52:18 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:45.604 12:52:18 -- nvmf/common.sh@119 -- # set +e 00:24:45.604 12:52:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:45.604 12:52:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:45.604 rmmod nvme_rdma 00:24:45.604 rmmod nvme_fabrics 00:24:45.604 12:52:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:45.604 12:52:18 -- nvmf/common.sh@123 -- # set -e 00:24:45.604 12:52:18 -- nvmf/common.sh@124 -- # return 0 00:24:45.604 12:52:18 -- nvmf/common.sh@477 -- # '[' -n 621620 ']' 00:24:45.604 12:52:18 -- nvmf/common.sh@478 -- # killprocess 621620 00:24:45.604 12:52:18 -- common/autotest_common.sh@936 -- # '[' -z 621620 ']' 00:24:45.604 12:52:18 -- common/autotest_common.sh@940 -- # kill -0 621620 00:24:45.604 12:52:18 -- common/autotest_common.sh@941 -- # uname 00:24:45.604 12:52:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:45.604 12:52:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 621620 00:24:45.604 12:52:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:45.604 12:52:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:45.604 12:52:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 621620' 00:24:45.604 killing process with pid 621620 00:24:45.604 12:52:18 -- common/autotest_common.sh@955 -- # kill 621620 00:24:45.604 12:52:18 -- common/autotest_common.sh@960 -- # wait 621620 00:24:45.865 12:52:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:45.865 12:52:18 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:45.865 00:24:45.865 real 0m9.095s 00:24:45.865 user 0m3.977s 00:24:45.865 sys 0m5.688s 00:24:45.865 12:52:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:45.865 12:52:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.865 ************************************ 00:24:45.865 END TEST nvmf_async_init 00:24:45.865 ************************************ 00:24:45.865 12:52:18 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:24:45.865 12:52:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:45.865 12:52:18 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:24:45.865 12:52:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.865 ************************************ 00:24:45.865 START TEST dma 00:24:45.865 ************************************ 00:24:45.865 12:52:18 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:24:45.865 * Looking for test storage... 00:24:45.865 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:45.865 12:52:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:45.865 12:52:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:45.865 12:52:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:46.128 12:52:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:46.128 12:52:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:46.128 12:52:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:46.128 12:52:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:46.128 12:52:19 -- scripts/common.sh@335 -- # IFS=.-: 00:24:46.128 12:52:19 -- scripts/common.sh@335 -- # read -ra ver1 00:24:46.128 12:52:19 -- scripts/common.sh@336 -- # IFS=.-: 00:24:46.128 12:52:19 -- scripts/common.sh@336 -- # read -ra ver2 00:24:46.128 12:52:19 -- scripts/common.sh@337 -- # local 'op=<' 00:24:46.128 12:52:19 -- scripts/common.sh@339 -- # ver1_l=2 00:24:46.128 12:52:19 -- scripts/common.sh@340 -- # ver2_l=1 00:24:46.128 12:52:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:46.128 12:52:19 -- scripts/common.sh@343 -- # case "$op" in 00:24:46.128 12:52:19 -- scripts/common.sh@344 -- # : 1 00:24:46.128 12:52:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:46.128 12:52:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:46.128 12:52:19 -- scripts/common.sh@364 -- # decimal 1 00:24:46.128 12:52:19 -- scripts/common.sh@352 -- # local d=1 00:24:46.128 12:52:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:46.128 12:52:19 -- scripts/common.sh@354 -- # echo 1 00:24:46.128 12:52:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:46.128 12:52:19 -- scripts/common.sh@365 -- # decimal 2 00:24:46.128 12:52:19 -- scripts/common.sh@352 -- # local d=2 00:24:46.128 12:52:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:46.128 12:52:19 -- scripts/common.sh@354 -- # echo 2 00:24:46.128 12:52:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:46.128 12:52:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:46.128 12:52:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:46.128 12:52:19 -- scripts/common.sh@367 -- # return 0 00:24:46.128 12:52:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:46.128 12:52:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:46.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.128 --rc genhtml_branch_coverage=1 00:24:46.128 --rc genhtml_function_coverage=1 00:24:46.128 --rc genhtml_legend=1 00:24:46.128 --rc geninfo_all_blocks=1 00:24:46.128 --rc geninfo_unexecuted_blocks=1 00:24:46.128 00:24:46.128 ' 00:24:46.128 12:52:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:46.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.128 --rc genhtml_branch_coverage=1 00:24:46.128 --rc genhtml_function_coverage=1 00:24:46.128 --rc genhtml_legend=1 00:24:46.128 --rc geninfo_all_blocks=1 00:24:46.128 --rc geninfo_unexecuted_blocks=1 00:24:46.128 00:24:46.128 ' 00:24:46.128 12:52:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:46.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.128 --rc genhtml_branch_coverage=1 00:24:46.128 --rc genhtml_function_coverage=1 00:24:46.128 --rc genhtml_legend=1 00:24:46.128 --rc geninfo_all_blocks=1 00:24:46.128 --rc geninfo_unexecuted_blocks=1 00:24:46.128 00:24:46.128 ' 00:24:46.128 12:52:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:46.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.128 --rc genhtml_branch_coverage=1 00:24:46.128 --rc genhtml_function_coverage=1 00:24:46.128 --rc genhtml_legend=1 00:24:46.128 --rc geninfo_all_blocks=1 00:24:46.128 --rc geninfo_unexecuted_blocks=1 00:24:46.128 00:24:46.128 ' 00:24:46.128 12:52:19 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:46.128 12:52:19 -- nvmf/common.sh@7 -- # uname -s 00:24:46.128 12:52:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.128 12:52:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.128 12:52:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.128 12:52:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.128 12:52:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.128 12:52:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.128 12:52:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.128 12:52:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.128 12:52:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.128 12:52:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.128 12:52:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 
00:24:46.128 12:52:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:46.128 12:52:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.128 12:52:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.128 12:52:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:46.128 12:52:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:46.128 12:52:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.128 12:52:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.128 12:52:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.128 12:52:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.128 12:52:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.128 12:52:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.128 12:52:19 -- paths/export.sh@5 -- # export PATH 00:24:46.128 12:52:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.128 12:52:19 -- nvmf/common.sh@46 -- # : 0 00:24:46.128 12:52:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:46.128 12:52:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:46.128 12:52:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:46.128 12:52:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.128 12:52:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.128 12:52:19 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:46.128 12:52:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:46.128 12:52:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:46.128 12:52:19 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:24:46.128 12:52:19 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:24:46.128 12:52:19 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:24:46.128 12:52:19 -- host/dma.sh@18 -- # subsystem=0 00:24:46.128 12:52:19 -- host/dma.sh@93 -- # nvmftestinit 00:24:46.128 12:52:19 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:46.128 12:52:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:46.128 12:52:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:46.128 12:52:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:46.128 12:52:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:46.128 12:52:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.128 12:52:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:46.128 12:52:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.128 12:52:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:46.128 12:52:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:46.128 12:52:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:46.128 12:52:19 -- common/autotest_common.sh@10 -- # set +x 00:24:54.270 12:52:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:54.270 12:52:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:54.270 12:52:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:54.270 12:52:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:54.270 12:52:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:54.270 12:52:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:54.270 12:52:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:54.270 12:52:25 -- nvmf/common.sh@294 -- # net_devs=() 00:24:54.271 12:52:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:54.271 12:52:25 -- nvmf/common.sh@295 -- # e810=() 00:24:54.271 12:52:25 -- nvmf/common.sh@295 -- # local -ga e810 00:24:54.271 12:52:25 -- nvmf/common.sh@296 -- # x722=() 00:24:54.271 12:52:25 -- nvmf/common.sh@296 -- # local -ga x722 00:24:54.271 12:52:25 -- nvmf/common.sh@297 -- # mlx=() 00:24:54.271 12:52:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:54.271 12:52:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.271 12:52:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.271 12:52:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.271 12:52:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.271 12:52:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.271 12:52:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.271 12:52:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.271 12:52:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.271 12:52:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.271 12:52:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.271 12:52:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.271 12:52:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:54.271 12:52:25 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:24:54.271 12:52:26 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:54.271 12:52:26 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:54.271 12:52:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:54.271 12:52:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:54.271 12:52:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:24:54.271 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:24:54.271 12:52:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:54.271 12:52:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:54.271 12:52:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:24:54.271 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:24:54.271 12:52:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:54.271 12:52:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:54.271 12:52:26 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:54.271 12:52:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.271 12:52:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:54.271 12:52:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.271 12:52:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:24:54.271 Found net devices under 0000:98:00.0: mlx_0_0 00:24:54.271 12:52:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.271 12:52:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:54.271 12:52:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.271 12:52:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:54.271 12:52:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.271 12:52:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:24:54.271 Found net devices under 0000:98:00.1: mlx_0_1 00:24:54.271 12:52:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.271 12:52:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:54.271 12:52:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:54.271 12:52:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:54.271 12:52:26 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:54.271 12:52:26 -- nvmf/common.sh@57 -- # uname 00:24:54.271 12:52:26 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:54.271 12:52:26 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:24:54.271 12:52:26 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:54.271 12:52:26 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:54.271 12:52:26 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:54.271 12:52:26 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:54.271 12:52:26 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:54.271 12:52:26 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:54.271 12:52:26 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:54.271 12:52:26 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:54.271 12:52:26 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:54.271 12:52:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:54.271 12:52:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:54.271 12:52:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:54.271 12:52:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:54.271 12:52:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:54.271 12:52:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:54.271 12:52:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.271 12:52:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:54.271 12:52:26 -- nvmf/common.sh@104 -- # continue 2 00:24:54.271 12:52:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:54.271 12:52:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.271 12:52:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.271 12:52:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:54.271 12:52:26 -- nvmf/common.sh@104 -- # continue 2 00:24:54.271 12:52:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:54.271 12:52:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:54.271 12:52:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:54.271 12:52:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:54.271 12:52:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:54.271 12:52:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:54.271 12:52:26 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:54.271 12:52:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:54.271 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:54.271 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:24:54.271 altname enp152s0f0np0 00:24:54.271 altname ens817f0np0 00:24:54.271 inet 192.168.100.8/24 scope global mlx_0_0 00:24:54.271 valid_lft forever preferred_lft forever 00:24:54.271 12:52:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:54.271 12:52:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:54.271 12:52:26 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:54.271 12:52:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:54.271 12:52:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:54.271 12:52:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:54.271 12:52:26 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:54.271 12:52:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:54.271 5: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:54.271 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:24:54.271 altname enp152s0f1np1 00:24:54.271 altname ens817f1np1 00:24:54.271 inet 192.168.100.9/24 scope global mlx_0_1 00:24:54.271 valid_lft forever preferred_lft forever 00:24:54.271 12:52:26 -- nvmf/common.sh@410 -- # return 0 00:24:54.271 12:52:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:54.271 12:52:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:54.271 12:52:26 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:54.271 12:52:26 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:54.271 12:52:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:54.271 12:52:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:54.271 12:52:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:54.271 12:52:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:54.271 12:52:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:54.271 12:52:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:54.271 12:52:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.271 12:52:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:54.271 12:52:26 -- nvmf/common.sh@104 -- # continue 2 00:24:54.271 12:52:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:54.271 12:52:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.271 12:52:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.271 12:52:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:54.271 12:52:26 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:54.271 12:52:26 -- nvmf/common.sh@104 -- # continue 2 00:24:54.271 12:52:26 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:54.271 12:52:26 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:54.271 12:52:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:54.271 12:52:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:54.271 12:52:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:54.271 12:52:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:54.271 12:52:26 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:54.272 12:52:26 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:54.272 12:52:26 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:54.272 12:52:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:54.272 12:52:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:54.272 12:52:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:54.272 12:52:26 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:54.272 192.168.100.9' 00:24:54.272 12:52:26 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:54.272 192.168.100.9' 00:24:54.272 12:52:26 -- nvmf/common.sh@445 -- # head -n 1 00:24:54.272 12:52:26 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:54.272 12:52:26 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:54.272 192.168.100.9' 00:24:54.272 12:52:26 -- nvmf/common.sh@446 -- # tail -n +2 00:24:54.272 12:52:26 -- nvmf/common.sh@446 -- # head -n 1 00:24:54.272 12:52:26 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:54.272 12:52:26 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:54.272 12:52:26 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:54.272 12:52:26 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:54.272 12:52:26 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:54.272 12:52:26 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:54.272 12:52:26 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:24:54.272 12:52:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:54.272 12:52:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:54.272 12:52:26 -- common/autotest_common.sh@10 -- # set +x 00:24:54.272 12:52:26 -- nvmf/common.sh@469 -- # nvmfpid=625680 00:24:54.272 12:52:26 -- nvmf/common.sh@470 -- # waitforlisten 625680 00:24:54.272 12:52:26 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:54.272 12:52:26 -- common/autotest_common.sh@829 -- # '[' -z 625680 ']' 00:24:54.272 12:52:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.272 12:52:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:54.272 12:52:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.272 12:52:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:54.272 12:52:26 -- common/autotest_common.sh@10 -- # set +x 00:24:54.272 [2024-11-20 12:52:26.287651] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:54.272 [2024-11-20 12:52:26.287703] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.272 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.272 [2024-11-20 12:52:26.349578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:54.272 [2024-11-20 12:52:26.412427] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:54.272 [2024-11-20 12:52:26.412549] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.272 [2024-11-20 12:52:26.412557] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.272 [2024-11-20 12:52:26.412565] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
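Condensed, the RDMA bring-up that nvmftestinit performs above amounts to roughly the following steps (module names, interface names and addresses exactly as reported on this node):
  modprobe ib_cm ; modprobe ib_core ; modprobe ib_umad ; modprobe ib_uverbs
  modprobe iw_cm ; modprobe rdma_cm ; modprobe rdma_ucm
  ip -o -4 addr show mlx_0_0       # 192.168.100.8/24 on the first mlx5 port (0000:98:00.0)
  ip -o -4 addr show mlx_0_1       # 192.168.100.9/24 on the second mlx5 port (0000:98:00.1)
  modprobe nvme-rdma
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3   # starts as pid 625680 in this run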
00:24:54.272 [2024-11-20 12:52:26.412700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.272 [2024-11-20 12:52:26.412701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.272 12:52:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:54.272 12:52:27 -- common/autotest_common.sh@862 -- # return 0 00:24:54.272 12:52:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:54.272 12:52:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:54.272 12:52:27 -- common/autotest_common.sh@10 -- # set +x 00:24:54.272 12:52:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.272 12:52:27 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:54.272 12:52:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.272 12:52:27 -- common/autotest_common.sh@10 -- # set +x 00:24:54.272 [2024-11-20 12:52:27.129445] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b0f1a0/0x1b13690) succeed. 00:24:54.272 [2024-11-20 12:52:27.142552] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b106a0/0x1b54d30) succeed. 00:24:54.272 12:52:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.272 12:52:27 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:24:54.272 12:52:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.272 12:52:27 -- common/autotest_common.sh@10 -- # set +x 00:24:54.272 Malloc0 00:24:54.272 12:52:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.272 12:52:27 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:54.272 12:52:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.272 12:52:27 -- common/autotest_common.sh@10 -- # set +x 00:24:54.272 12:52:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.272 12:52:27 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:24:54.272 12:52:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.272 12:52:27 -- common/autotest_common.sh@10 -- # set +x 00:24:54.272 12:52:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.272 12:52:27 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:54.272 12:52:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.272 12:52:27 -- common/autotest_common.sh@10 -- # set +x 00:24:54.272 [2024-11-20 12:52:27.305695] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:54.272 12:52:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.272 12:52:27 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate -r /var/tmp/dma.sock 00:24:54.272 12:52:27 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:24:54.272 12:52:27 -- nvmf/common.sh@520 -- # config=() 00:24:54.272 12:52:27 -- nvmf/common.sh@520 -- # local subsystem config 00:24:54.272 12:52:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:54.272 12:52:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:54.272 { 00:24:54.272 "params": { 00:24:54.272 "name": "Nvme$subsystem", 00:24:54.272 "trtype": "$TEST_TRANSPORT", 00:24:54.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.272 "adrfam": 
"ipv4", 00:24:54.272 "trsvcid": "$NVMF_PORT", 00:24:54.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.272 "hdgst": ${hdgst:-false}, 00:24:54.272 "ddgst": ${ddgst:-false} 00:24:54.272 }, 00:24:54.272 "method": "bdev_nvme_attach_controller" 00:24:54.272 } 00:24:54.272 EOF 00:24:54.272 )") 00:24:54.272 12:52:27 -- nvmf/common.sh@542 -- # cat 00:24:54.272 12:52:27 -- nvmf/common.sh@544 -- # jq . 00:24:54.272 12:52:27 -- nvmf/common.sh@545 -- # IFS=, 00:24:54.272 12:52:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:54.272 "params": { 00:24:54.272 "name": "Nvme0", 00:24:54.272 "trtype": "rdma", 00:24:54.272 "traddr": "192.168.100.8", 00:24:54.272 "adrfam": "ipv4", 00:24:54.272 "trsvcid": "4420", 00:24:54.272 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:54.272 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:54.272 "hdgst": false, 00:24:54.272 "ddgst": false 00:24:54.272 }, 00:24:54.272 "method": "bdev_nvme_attach_controller" 00:24:54.272 }' 00:24:54.272 [2024-11-20 12:52:27.355830] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:54.272 [2024-11-20 12:52:27.355876] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626033 ] 00:24:54.533 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.533 [2024-11-20 12:52:27.405712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:54.533 [2024-11-20 12:52:27.458212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:54.533 [2024-11-20 12:52:27.458306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:59.826 bdev Nvme0n1 reports 1 memory domains 00:24:59.826 bdev Nvme0n1 supports RDMA memory domain 00:24:59.826 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:59.826 ========================================================================== 00:24:59.826 Latency [us] 00:24:59.826 IOPS MiB/s Average min max 00:24:59.826 Core 2: 24284.39 94.86 658.16 281.83 8932.20 00:24:59.826 Core 3: 26398.10 103.12 605.35 164.72 9019.51 00:24:59.826 ========================================================================== 00:24:59.826 Total : 50682.48 197.98 630.65 164.72 9019.51 00:24:59.826 00:24:59.826 Total operations: 253519, translate 253519 pull_push 0 memzero 0 00:24:59.826 12:52:32 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push -r /var/tmp/dma.sock 00:24:59.826 12:52:32 -- host/dma.sh@107 -- # gen_malloc_json 00:24:59.826 12:52:32 -- host/dma.sh@21 -- # jq . 00:24:59.826 [2024-11-20 12:52:32.817730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:59.826 [2024-11-20 12:52:32.817786] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid627058 ] 00:24:59.826 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.826 [2024-11-20 12:52:32.868257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:59.826 [2024-11-20 12:52:32.919286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:59.826 [2024-11-20 12:52:32.919287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.114 bdev Malloc0 reports 1 memory domains 00:25:05.114 bdev Malloc0 doesn't support RDMA memory domain 00:25:05.114 Initialization complete, running randrw IO for 5 sec on 2 cores 00:25:05.114 ========================================================================== 00:25:05.114 Latency [us] 00:25:05.114 IOPS MiB/s Average min max 00:25:05.114 Core 2: 19483.01 76.11 820.69 250.58 1210.21 00:25:05.114 Core 3: 19730.55 77.07 810.38 246.50 1388.49 00:25:05.114 ========================================================================== 00:25:05.114 Total : 39213.56 153.18 815.50 246.50 1388.49 00:25:05.114 00:25:05.114 Total operations: 196114, translate 0 pull_push 784456 memzero 0 00:25:05.114 12:52:38 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero -r /var/tmp/dma.sock 00:25:05.114 12:52:38 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:25:05.114 12:52:38 -- host/dma.sh@48 -- # local subsystem=0 00:25:05.114 12:52:38 -- host/dma.sh@50 -- # jq . 00:25:05.114 Ignoring -M option 00:25:05.114 [2024-11-20 12:52:38.163587] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:05.114 [2024-11-20 12:52:38.163643] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid628073 ] 00:25:05.114 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.114 [2024-11-20 12:52:38.213857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:05.376 [2024-11-20 12:52:38.263888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:05.376 [2024-11-20 12:52:38.263889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.376 [2024-11-20 12:52:38.447716] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:25:10.662 [2024-11-20 12:52:43.476092] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:25:10.662 bdev 7f73defe-8fff-4c8f-979a-5f6672b36ab4 reports 1 memory domains 00:25:10.662 bdev 7f73defe-8fff-4c8f-979a-5f6672b36ab4 supports RDMA memory domain 00:25:10.662 Initialization complete, running randread IO for 5 sec on 2 cores 00:25:10.662 ========================================================================== 00:25:10.662 Latency [us] 00:25:10.662 IOPS MiB/s Average min max 00:25:10.662 Core 2: 129253.38 504.90 123.27 53.63 1976.49 00:25:10.662 Core 3: 136973.04 535.05 116.32 48.62 2110.12 00:25:10.662 ========================================================================== 00:25:10.662 Total : 266226.42 1039.95 119.69 48.62 2110.12 00:25:10.662 00:25:10.662 Total operations: 1331225, translate 0 pull_push 0 memzero 1331225 00:25:10.662 12:52:43 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:25:10.662 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.662 [2024-11-20 12:52:43.749648] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:13.211 Initializing NVMe Controllers 00:25:13.211 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:25:13.211 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:25:13.211 Initialization complete. Launching workers. 00:25:13.211 ======================================================== 00:25:13.211 Latency(us) 00:25:13.211 Device Information : IOPS MiB/s Average min max 00:25:13.211 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.87 7972.79 6933.26 9029.63 00:25:13.211 ======================================================== 00:25:13.211 Total : 2016.00 7.87 7972.79 6933.26 9029.63 00:25:13.211 00:25:13.211 12:52:46 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate -r /var/tmp/dma.sock 00:25:13.211 12:52:46 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:25:13.211 12:52:46 -- host/dma.sh@48 -- # local subsystem=0 00:25:13.211 12:52:46 -- host/dma.sh@50 -- # jq . 
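For reference, the three DMA offload modes exercised so far map to these test_dma invocations (flags copied from the trace; the binary is invoked as /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma):
  test_dma -q 16 -o 4096 -w randrw   -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1    -f -x translate -r /var/tmp/dma.sock   # RDMA memory domain: all ops reported as translate
  test_dma -q 16 -o 4096 -w randrw   -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0       -x pull_push -r /var/tmp/dma.sock   # Malloc0 reports no RDMA memory domain: pull_push path
  test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero  -r /var/tmp/dma.sock   # memzero path on the lvol bdev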
00:25:13.211 [2024-11-20 12:52:46.124211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:13.211 [2024-11-20 12:52:46.124260] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid629567 ] 00:25:13.211 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.211 [2024-11-20 12:52:46.174334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:13.211 [2024-11-20 12:52:46.225244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:13.211 [2024-11-20 12:52:46.225333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.473 [2024-11-20 12:52:46.411520] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:25:18.763 [2024-11-20 12:52:51.438708] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:25:18.763 bdev 8dd7973d-97a1-49da-aaaf-8b9f8b0248e8 reports 1 memory domains 00:25:18.763 bdev 8dd7973d-97a1-49da-aaaf-8b9f8b0248e8 supports RDMA memory domain 00:25:18.763 Initialization complete, running randrw IO for 5 sec on 2 cores 00:25:18.763 ========================================================================== 00:25:18.763 Latency [us] 00:25:18.763 IOPS MiB/s Average min max 00:25:18.763 Core 2: 21452.69 83.80 745.33 11.04 13205.64 00:25:18.763 Core 3: 27610.58 107.85 578.96 7.67 12947.05 00:25:18.763 ========================================================================== 00:25:18.763 Total : 49063.27 191.65 651.70 7.67 13205.64 00:25:18.763 00:25:18.763 Total operations: 245345, translate 245242 pull_push 0 memzero 103 00:25:18.763 12:52:51 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:25:18.763 12:52:51 -- host/dma.sh@120 -- # nvmftestfini 00:25:18.763 12:52:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:18.763 12:52:51 -- nvmf/common.sh@116 -- # sync 00:25:18.763 12:52:51 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:25:18.763 12:52:51 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:25:18.763 12:52:51 -- nvmf/common.sh@119 -- # set +e 00:25:18.763 12:52:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:18.763 12:52:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:25:18.763 rmmod nvme_rdma 00:25:18.763 rmmod nvme_fabrics 00:25:18.763 12:52:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:18.763 12:52:51 -- nvmf/common.sh@123 -- # set -e 00:25:18.763 12:52:51 -- nvmf/common.sh@124 -- # return 0 00:25:18.763 12:52:51 -- nvmf/common.sh@477 -- # '[' -n 625680 ']' 00:25:18.763 12:52:51 -- nvmf/common.sh@478 -- # killprocess 625680 00:25:18.763 12:52:51 -- common/autotest_common.sh@936 -- # '[' -z 625680 ']' 00:25:18.763 12:52:51 -- common/autotest_common.sh@940 -- # kill -0 625680 00:25:18.763 12:52:51 -- common/autotest_common.sh@941 -- # uname 00:25:18.763 12:52:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:18.763 12:52:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 625680 00:25:18.763 12:52:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:18.763 12:52:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:18.763 12:52:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 
625680' 00:25:18.763 killing process with pid 625680 00:25:18.763 12:52:51 -- common/autotest_common.sh@955 -- # kill 625680 00:25:18.763 12:52:51 -- common/autotest_common.sh@960 -- # wait 625680 00:25:19.024 12:52:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:19.024 12:52:51 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:25:19.024 00:25:19.024 real 0m33.074s 00:25:19.024 user 1m35.300s 00:25:19.024 sys 0m6.286s 00:25:19.024 12:52:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:19.024 12:52:51 -- common/autotest_common.sh@10 -- # set +x 00:25:19.024 ************************************ 00:25:19.024 END TEST dma 00:25:19.024 ************************************ 00:25:19.024 12:52:51 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:25:19.024 12:52:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:19.024 12:52:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:19.024 12:52:51 -- common/autotest_common.sh@10 -- # set +x 00:25:19.024 ************************************ 00:25:19.024 START TEST nvmf_identify 00:25:19.024 ************************************ 00:25:19.024 12:52:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:25:19.024 * Looking for test storage... 00:25:19.024 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:19.024 12:52:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:19.024 12:52:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:19.024 12:52:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:19.286 12:52:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:19.286 12:52:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:19.286 12:52:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:19.286 12:52:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:19.286 12:52:52 -- scripts/common.sh@335 -- # IFS=.-: 00:25:19.286 12:52:52 -- scripts/common.sh@335 -- # read -ra ver1 00:25:19.286 12:52:52 -- scripts/common.sh@336 -- # IFS=.-: 00:25:19.286 12:52:52 -- scripts/common.sh@336 -- # read -ra ver2 00:25:19.286 12:52:52 -- scripts/common.sh@337 -- # local 'op=<' 00:25:19.286 12:52:52 -- scripts/common.sh@339 -- # ver1_l=2 00:25:19.286 12:52:52 -- scripts/common.sh@340 -- # ver2_l=1 00:25:19.286 12:52:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:19.286 12:52:52 -- scripts/common.sh@343 -- # case "$op" in 00:25:19.286 12:52:52 -- scripts/common.sh@344 -- # : 1 00:25:19.286 12:52:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:19.286 12:52:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:19.286 12:52:52 -- scripts/common.sh@364 -- # decimal 1 00:25:19.286 12:52:52 -- scripts/common.sh@352 -- # local d=1 00:25:19.286 12:52:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:19.286 12:52:52 -- scripts/common.sh@354 -- # echo 1 00:25:19.286 12:52:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:19.286 12:52:52 -- scripts/common.sh@365 -- # decimal 2 00:25:19.286 12:52:52 -- scripts/common.sh@352 -- # local d=2 00:25:19.286 12:52:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:19.286 12:52:52 -- scripts/common.sh@354 -- # echo 2 00:25:19.286 12:52:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:19.286 12:52:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:19.286 12:52:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:19.286 12:52:52 -- scripts/common.sh@367 -- # return 0 00:25:19.286 12:52:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:19.286 12:52:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:19.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.286 --rc genhtml_branch_coverage=1 00:25:19.286 --rc genhtml_function_coverage=1 00:25:19.286 --rc genhtml_legend=1 00:25:19.286 --rc geninfo_all_blocks=1 00:25:19.286 --rc geninfo_unexecuted_blocks=1 00:25:19.286 00:25:19.286 ' 00:25:19.286 12:52:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:19.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.286 --rc genhtml_branch_coverage=1 00:25:19.286 --rc genhtml_function_coverage=1 00:25:19.286 --rc genhtml_legend=1 00:25:19.286 --rc geninfo_all_blocks=1 00:25:19.286 --rc geninfo_unexecuted_blocks=1 00:25:19.286 00:25:19.286 ' 00:25:19.286 12:52:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:19.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.286 --rc genhtml_branch_coverage=1 00:25:19.286 --rc genhtml_function_coverage=1 00:25:19.286 --rc genhtml_legend=1 00:25:19.286 --rc geninfo_all_blocks=1 00:25:19.286 --rc geninfo_unexecuted_blocks=1 00:25:19.286 00:25:19.286 ' 00:25:19.286 12:52:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:19.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.286 --rc genhtml_branch_coverage=1 00:25:19.286 --rc genhtml_function_coverage=1 00:25:19.286 --rc genhtml_legend=1 00:25:19.286 --rc geninfo_all_blocks=1 00:25:19.286 --rc geninfo_unexecuted_blocks=1 00:25:19.286 00:25:19.286 ' 00:25:19.286 12:52:52 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.286 12:52:52 -- nvmf/common.sh@7 -- # uname -s 00:25:19.286 12:52:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.286 12:52:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.286 12:52:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.286 12:52:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.286 12:52:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.286 12:52:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.286 12:52:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.286 12:52:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.286 12:52:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.286 12:52:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.286 12:52:52 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:19.286 12:52:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:19.286 12:52:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.286 12:52:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.286 12:52:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.286 12:52:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:19.287 12:52:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.287 12:52:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.287 12:52:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.287 12:52:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.287 12:52:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.287 12:52:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.287 12:52:52 -- paths/export.sh@5 -- # export PATH 00:25:19.287 12:52:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.287 12:52:52 -- nvmf/common.sh@46 -- # : 0 00:25:19.287 12:52:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:19.287 12:52:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:19.287 12:52:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:19.287 12:52:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.287 12:52:52 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.287 12:52:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:19.287 12:52:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:19.287 12:52:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:19.287 12:52:52 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:19.287 12:52:52 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:19.287 12:52:52 -- host/identify.sh@14 -- # nvmftestinit 00:25:19.287 12:52:52 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:25:19.287 12:52:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.287 12:52:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:19.287 12:52:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:19.287 12:52:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:19.287 12:52:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.287 12:52:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:19.287 12:52:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.287 12:52:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:19.287 12:52:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:19.287 12:52:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:19.287 12:52:52 -- common/autotest_common.sh@10 -- # set +x 00:25:25.880 12:52:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:25.880 12:52:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:25.880 12:52:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:25.880 12:52:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:25.880 12:52:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:25.880 12:52:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:25.880 12:52:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:25.880 12:52:58 -- nvmf/common.sh@294 -- # net_devs=() 00:25:25.880 12:52:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:25.880 12:52:58 -- nvmf/common.sh@295 -- # e810=() 00:25:25.880 12:52:58 -- nvmf/common.sh@295 -- # local -ga e810 00:25:25.880 12:52:58 -- nvmf/common.sh@296 -- # x722=() 00:25:25.881 12:52:58 -- nvmf/common.sh@296 -- # local -ga x722 00:25:25.881 12:52:58 -- nvmf/common.sh@297 -- # mlx=() 00:25:25.881 12:52:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:25.881 12:52:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:25.881 12:52:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:25.881 12:52:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:25.881 12:52:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:25.881 12:52:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:25.881 12:52:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:25.881 12:52:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:25.881 12:52:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:25.881 12:52:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:25.881 12:52:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:25.881 12:52:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:25.881 12:52:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:25.881 12:52:58 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:25:25.881 
12:52:58 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:25:25.881 12:52:58 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:25:25.881 12:52:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:25.881 12:52:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:25.881 12:52:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:25:25.881 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:25:25.881 12:52:58 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:25.881 12:52:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:25.881 12:52:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:25:25.881 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:25:25.881 12:52:58 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:25.881 12:52:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:25.881 12:52:58 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:25.881 12:52:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.881 12:52:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:25.881 12:52:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.881 12:52:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:25:25.881 Found net devices under 0000:98:00.0: mlx_0_0 00:25:25.881 12:52:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.881 12:52:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:25.881 12:52:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.881 12:52:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:25.881 12:52:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.881 12:52:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:25:25.881 Found net devices under 0000:98:00.1: mlx_0_1 00:25:25.881 12:52:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.881 12:52:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:25.881 12:52:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:25.881 12:52:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@408 -- # rdma_device_init 00:25:25.881 12:52:58 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:25:25.881 12:52:58 -- nvmf/common.sh@57 -- # uname 00:25:25.881 12:52:58 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:25:25.881 12:52:58 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:25:25.881 
12:52:58 -- nvmf/common.sh@62 -- # modprobe ib_core 00:25:25.881 12:52:58 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:25:25.881 12:52:58 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:25:25.881 12:52:58 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:25:25.881 12:52:58 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:25:25.881 12:52:58 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:25:25.881 12:52:58 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:25:25.881 12:52:58 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:25.881 12:52:58 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:25:25.881 12:52:58 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:25.881 12:52:58 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:25.881 12:52:58 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:25.881 12:52:58 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:25.881 12:52:58 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:25.881 12:52:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:25.881 12:52:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:25.881 12:52:58 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:25.881 12:52:58 -- nvmf/common.sh@104 -- # continue 2 00:25:25.881 12:52:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:25.881 12:52:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:25.881 12:52:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:25.881 12:52:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:25.881 12:52:58 -- nvmf/common.sh@104 -- # continue 2 00:25:25.881 12:52:58 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:25.881 12:52:58 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:25:25.881 12:52:58 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:25.881 12:52:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:25.881 12:52:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:25.881 12:52:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:25.881 12:52:58 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:25:25.881 12:52:58 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:25:25.881 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:25.881 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:25:25.881 altname enp152s0f0np0 00:25:25.881 altname ens817f0np0 00:25:25.881 inet 192.168.100.8/24 scope global mlx_0_0 00:25:25.881 valid_lft forever preferred_lft forever 00:25:25.881 12:52:58 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:25.881 12:52:58 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:25:25.881 12:52:58 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:25.881 12:52:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:25.881 12:52:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:25.881 12:52:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:25.881 12:52:58 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:25:25.881 12:52:58 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:25:25.881 12:52:58 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:25:25.881 5: mlx_0_1: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:25:25.881 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:25:25.881 altname enp152s0f1np1 00:25:25.882 altname ens817f1np1 00:25:25.882 inet 192.168.100.9/24 scope global mlx_0_1 00:25:25.882 valid_lft forever preferred_lft forever 00:25:25.882 12:52:58 -- nvmf/common.sh@410 -- # return 0 00:25:25.882 12:52:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:25.882 12:52:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:25.882 12:52:58 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:25:25.882 12:52:58 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:25:25.882 12:52:58 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:25:25.882 12:52:58 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:25.882 12:52:58 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:25.882 12:52:58 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:25.882 12:52:58 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:25.882 12:52:58 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:25.882 12:52:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:25.882 12:52:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:25.882 12:52:58 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:25.882 12:52:58 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:25.882 12:52:58 -- nvmf/common.sh@104 -- # continue 2 00:25:25.882 12:52:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:25.882 12:52:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:25.882 12:52:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:25.882 12:52:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:25.882 12:52:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:25.882 12:52:58 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:25.882 12:52:58 -- nvmf/common.sh@104 -- # continue 2 00:25:25.882 12:52:58 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:25.882 12:52:58 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:25:25.882 12:52:58 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:25.882 12:52:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:25.882 12:52:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:25.882 12:52:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:25.882 12:52:58 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:25.882 12:52:58 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:25:25.882 12:52:58 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:25.882 12:52:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:25.882 12:52:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:25.882 12:52:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:25.882 12:52:58 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:25:25.882 192.168.100.9' 00:25:25.882 12:52:58 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:25:25.882 192.168.100.9' 00:25:25.882 12:52:58 -- nvmf/common.sh@445 -- # head -n 1 00:25:25.882 12:52:58 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:25.882 12:52:58 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:25.882 192.168.100.9' 00:25:25.882 12:52:58 -- nvmf/common.sh@446 -- # tail -n +2 00:25:25.882 12:52:58 -- nvmf/common.sh@446 -- # head -n 1 00:25:25.882 12:52:58 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:25.882 12:52:58 -- nvmf/common.sh@447 -- # '[' 
-z 192.168.100.8 ']' 00:25:25.882 12:52:58 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:25.882 12:52:58 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:25:25.882 12:52:58 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:25:25.882 12:52:58 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:25:25.882 12:52:58 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:25.882 12:52:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:25.882 12:52:58 -- common/autotest_common.sh@10 -- # set +x 00:25:25.882 12:52:58 -- host/identify.sh@19 -- # nvmfpid=634450 00:25:25.882 12:52:58 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:25.882 12:52:58 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:25.882 12:52:58 -- host/identify.sh@23 -- # waitforlisten 634450 00:25:25.882 12:52:58 -- common/autotest_common.sh@829 -- # '[' -z 634450 ']' 00:25:25.882 12:52:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.882 12:52:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:25.882 12:52:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.882 12:52:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:25.882 12:52:58 -- common/autotest_common.sh@10 -- # set +x 00:25:26.143 [2024-11-20 12:52:58.992789] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:26.143 [2024-11-20 12:52:58.992858] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.143 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.143 [2024-11-20 12:52:59.058263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:26.143 [2024-11-20 12:52:59.131436] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:26.143 [2024-11-20 12:52:59.131572] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.143 [2024-11-20 12:52:59.131582] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.143 [2024-11-20 12:52:59.131591] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
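The block above is where host/identify.sh brings the SPDK target up before running the identify test: it loads nvme-rdma, launches nvmf_tgt with the core mask and trace flags shown, records its pid (634450 in this run), and waits for the application to listen on /var/tmp/spdk.sock before issuing any RPCs. A minimal stand-alone sketch of that start-and-wait step, assuming the same workspace layout as this log; the polling loop is a simplified stand-in for the autotest waitforlisten helper, not the helper itself:

  #!/usr/bin/env bash
  # Start the NVMe-oF target with the same flags the log shows, then wait for its RPC socket.
  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # assumed: same checkout as above
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Simplified wait: poll for the default RPC socket rather than using waitforlisten.
  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break
      sleep 0.1
  done
  echo "nvmf_tgt running as pid $nvmfpid"

Once the socket is present, the test proceeds to configure the RDMA transport and the test subsystem over RPC, as the following log lines show.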
00:25:26.143 [2024-11-20 12:52:59.131737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.143 [2024-11-20 12:52:59.131858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:26.143 [2024-11-20 12:52:59.132029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.143 [2024-11-20 12:52:59.132029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:26.714 12:52:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:26.714 12:52:59 -- common/autotest_common.sh@862 -- # return 0 00:25:26.714 12:52:59 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:26.714 12:52:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.714 12:52:59 -- common/autotest_common.sh@10 -- # set +x 00:25:26.714 [2024-11-20 12:52:59.810055] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb477f0/0xb4bce0) succeed. 00:25:26.976 [2024-11-20 12:52:59.825678] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb48de0/0xb8d380) succeed. 00:25:26.976 12:52:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.976 12:52:59 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:26.976 12:52:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:26.976 12:52:59 -- common/autotest_common.sh@10 -- # set +x 00:25:26.976 12:52:59 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:26.976 12:52:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.976 12:52:59 -- common/autotest_common.sh@10 -- # set +x 00:25:26.976 Malloc0 00:25:26.976 12:53:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.976 12:53:00 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:26.976 12:53:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.976 12:53:00 -- common/autotest_common.sh@10 -- # set +x 00:25:26.976 12:53:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.976 12:53:00 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:26.976 12:53:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.976 12:53:00 -- common/autotest_common.sh@10 -- # set +x 00:25:26.976 12:53:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.976 12:53:00 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:26.976 12:53:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.976 12:53:00 -- common/autotest_common.sh@10 -- # set +x 00:25:26.976 [2024-11-20 12:53:00.038023] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:26.976 12:53:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.976 12:53:00 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:26.976 12:53:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.976 12:53:00 -- common/autotest_common.sh@10 -- # set +x 00:25:26.976 12:53:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.976 12:53:00 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:26.976 12:53:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.976 12:53:00 -- common/autotest_common.sh@10 -- # set +x 00:25:26.976 [2024-11-20 
12:53:00.061652] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:26.976 [ 00:25:26.976 { 00:25:26.976 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:26.976 "subtype": "Discovery", 00:25:26.976 "listen_addresses": [ 00:25:26.976 { 00:25:26.976 "transport": "RDMA", 00:25:26.976 "trtype": "RDMA", 00:25:26.976 "adrfam": "IPv4", 00:25:26.976 "traddr": "192.168.100.8", 00:25:26.976 "trsvcid": "4420" 00:25:26.976 } 00:25:26.976 ], 00:25:26.976 "allow_any_host": true, 00:25:26.976 "hosts": [] 00:25:26.976 }, 00:25:26.976 { 00:25:26.976 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:26.976 "subtype": "NVMe", 00:25:26.976 "listen_addresses": [ 00:25:26.976 { 00:25:26.976 "transport": "RDMA", 00:25:26.976 "trtype": "RDMA", 00:25:26.976 "adrfam": "IPv4", 00:25:26.976 "traddr": "192.168.100.8", 00:25:26.976 "trsvcid": "4420" 00:25:26.976 } 00:25:26.976 ], 00:25:26.976 "allow_any_host": true, 00:25:26.976 "hosts": [], 00:25:26.976 "serial_number": "SPDK00000000000001", 00:25:26.976 "model_number": "SPDK bdev Controller", 00:25:26.976 "max_namespaces": 32, 00:25:26.976 "min_cntlid": 1, 00:25:26.976 "max_cntlid": 65519, 00:25:26.976 "namespaces": [ 00:25:26.976 { 00:25:26.976 "nsid": 1, 00:25:26.976 "bdev_name": "Malloc0", 00:25:26.976 "name": "Malloc0", 00:25:26.976 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:26.976 "eui64": "ABCDEF0123456789", 00:25:26.976 "uuid": "ca9afe25-2cc0-40b8-a3e9-8fbdc61640c3" 00:25:26.976 } 00:25:26.976 ] 00:25:26.976 } 00:25:26.976 ] 00:25:26.976 12:53:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.976 12:53:00 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:27.243 [2024-11-20 12:53:00.097993] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:27.243 [2024-11-20 12:53:00.098049] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634530 ] 00:25:27.243 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.243 [2024-11-20 12:53:00.156522] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:27.243 [2024-11-20 12:53:00.156603] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:25:27.243 [2024-11-20 12:53:00.156627] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:25:27.243 [2024-11-20 12:53:00.156631] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:25:27.243 [2024-11-20 12:53:00.156661] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:27.243 [2024-11-20 12:53:00.173864] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
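At this point the test has provisioned the target over RPC (RDMA transport, the Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with its namespace, and RDMA listeners on 192.168.100.8:4420 for both the subsystem and discovery), confirmed the layout with nvmf_get_subsystems, and launched spdk_nvme_identify against the discovery NQN. The rpc_cmd calls in the log wrap SPDK's scripts/rpc.py; a hedged stand-alone equivalent of the same sequence, with the arguments copied from the log above, would look roughly like:

  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # assumed: same checkout as above
  rpc="$SPDK_DIR/scripts/rpc.py"
  "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
         --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  "$rpc" nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  "$rpc" nvmf_get_subsystems
  # Query the discovery controller the same way host/identify.sh does:
  "$SPDK_DIR/build/bin/spdk_nvme_identify" \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all

The debug lines that follow are that identify run's controller-initialization trace (connect the admin queue, read VS and CAP, toggle CC.EN, identify the controller, configure AER, set the keep-alive timeout) before the formatted discovery-controller report is printed.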
00:25:27.243 [2024-11-20 12:53:00.195192] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:27.243 [2024-11-20 12:53:00.195202] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:25:27.243 [2024-11-20 12:53:00.195210] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195216] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195221] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195226] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195231] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195236] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195242] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195250] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195255] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195260] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195265] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195271] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195276] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195281] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195286] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195291] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195296] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195301] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195306] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195311] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195316] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195321] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195326] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 
12:53:00.195331] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195337] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195342] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195347] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195352] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195357] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195362] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195367] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195371] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:25:27.243 [2024-11-20 12:53:00.195376] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:27.243 [2024-11-20 12:53:00.195379] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:25:27.243 [2024-11-20 12:53:00.195396] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.195409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x183b00 00:25:27.243 [2024-11-20 12:53:00.201988] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.243 [2024-11-20 12:53:00.201996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:27.243 [2024-11-20 12:53:00.202006] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.202013] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:27.243 [2024-11-20 12:53:00.202019] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:27.243 [2024-11-20 12:53:00.202024] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:27.243 [2024-11-20 12:53:00.202037] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.202044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.243 [2024-11-20 12:53:00.202072] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.243 [2024-11-20 12:53:00.202077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:25:27.243 [2024-11-20 12:53:00.202082] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:27.243 [2024-11-20 12:53:00.202087] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.202093] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:27.243 [2024-11-20 12:53:00.202100] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.202107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.243 [2024-11-20 12:53:00.202134] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.243 [2024-11-20 12:53:00.202139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:25:27.243 [2024-11-20 12:53:00.202145] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:27.243 [2024-11-20 12:53:00.202150] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.202156] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:27.243 [2024-11-20 12:53:00.202163] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.243 [2024-11-20 12:53:00.202170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.243 [2024-11-20 12:53:00.202187] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.243 [2024-11-20 12:53:00.202191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:27.244 [2024-11-20 12:53:00.202197] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:27.244 [2024-11-20 12:53:00.202201] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202209] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.244 [2024-11-20 12:53:00.202242] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.244 [2024-11-20 12:53:00.202246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:27.244 [2024-11-20 12:53:00.202253] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:27.244 [2024-11-20 12:53:00.202258] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:27.244 [2024-11-20 12:53:00.202263] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202269] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:27.244 [2024-11-20 12:53:00.202374] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:27.244 [2024-11-20 12:53:00.202378] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:27.244 [2024-11-20 12:53:00.202387] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.244 [2024-11-20 12:53:00.202415] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.244 [2024-11-20 12:53:00.202420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:27.244 [2024-11-20 12:53:00.202425] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:27.244 [2024-11-20 12:53:00.202430] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202438] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.244 [2024-11-20 12:53:00.202472] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.244 [2024-11-20 12:53:00.202476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:27.244 [2024-11-20 12:53:00.202481] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:27.244 [2024-11-20 12:53:00.202486] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:27.244 [2024-11-20 12:53:00.202490] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202496] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:27.244 [2024-11-20 12:53:00.202503] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:27.244 [2024-11-20 12:53:00.202512] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183b00 00:25:27.244 [2024-11-20 12:53:00.202554] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.244 [2024-11-20 12:53:00.202559] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:27.244 [2024-11-20 12:53:00.202567] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:27.244 [2024-11-20 12:53:00.202572] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:27.244 [2024-11-20 12:53:00.202578] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:27.244 [2024-11-20 12:53:00.202583] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:27.244 [2024-11-20 12:53:00.202588] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:27.244 [2024-11-20 12:53:00.202593] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:27.244 [2024-11-20 12:53:00.202597] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202606] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:27.244 [2024-11-20 12:53:00.202613] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202620] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.244 [2024-11-20 12:53:00.202639] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.244 [2024-11-20 12:53:00.202644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:27.244 [2024-11-20 12:53:00.202652] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.244 [2024-11-20 12:53:00.202664] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.244 [2024-11-20 12:53:00.202676] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.244 [2024-11-20 12:53:00.202688] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.244 [2024-11-20 12:53:00.202699] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:25:27.244 [2024-11-20 12:53:00.202703] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202713] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:27.244 [2024-11-20 12:53:00.202720] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202727] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.244 [2024-11-20 12:53:00.202750] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.244 [2024-11-20 12:53:00.202754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:25:27.244 [2024-11-20 12:53:00.202760] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:27.244 [2024-11-20 12:53:00.202765] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:27.244 [2024-11-20 12:53:00.202771] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202780] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183b00 00:25:27.244 [2024-11-20 12:53:00.202812] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.244 [2024-11-20 12:53:00.202817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:27.244 [2024-11-20 12:53:00.202823] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202831] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:27.244 [2024-11-20 12:53:00.202850] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x183b00 00:25:27.244 [2024-11-20 12:53:00.202865] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.244 [2024-11-20 12:53:00.202894] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.244 [2024-11-20 12:53:00.202899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:27.244 [2024-11-20 12:53:00.202909] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b80 length 0x40 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183b00 00:25:27.244 [2024-11-20 12:53:00.202921] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202927] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.244 [2024-11-20 12:53:00.202932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:27.244 [2024-11-20 12:53:00.202937] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183b00 00:25:27.244 [2024-11-20 12:53:00.202950] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.244 [2024-11-20 12:53:00.202955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:27.244 [2024-11-20 12:53:00.202964] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183b00 00:25:27.245 [2024-11-20 12:53:00.202970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183b00 00:25:27.245 [2024-11-20 12:53:00.202975] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183b00 00:25:27.245 [2024-11-20 12:53:00.203004] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.245 [2024-11-20 12:53:00.203009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:27.245 [2024-11-20 12:53:00.203018] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183b00 00:25:27.245 ===================================================== 00:25:27.245 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:27.245 ===================================================== 00:25:27.245 Controller Capabilities/Features 00:25:27.245 ================================ 00:25:27.245 Vendor ID: 0000 00:25:27.245 Subsystem Vendor ID: 0000 00:25:27.245 Serial Number: .................... 00:25:27.245 Model Number: ........................................ 
00:25:27.245 Firmware Version: 24.01.1 00:25:27.245 Recommended Arb Burst: 0 00:25:27.245 IEEE OUI Identifier: 00 00 00 00:25:27.245 Multi-path I/O 00:25:27.245 May have multiple subsystem ports: No 00:25:27.245 May have multiple controllers: No 00:25:27.245 Associated with SR-IOV VF: No 00:25:27.245 Max Data Transfer Size: 131072 00:25:27.245 Max Number of Namespaces: 0 00:25:27.245 Max Number of I/O Queues: 1024 00:25:27.245 NVMe Specification Version (VS): 1.3 00:25:27.245 NVMe Specification Version (Identify): 1.3 00:25:27.245 Maximum Queue Entries: 128 00:25:27.245 Contiguous Queues Required: Yes 00:25:27.245 Arbitration Mechanisms Supported 00:25:27.245 Weighted Round Robin: Not Supported 00:25:27.245 Vendor Specific: Not Supported 00:25:27.245 Reset Timeout: 15000 ms 00:25:27.245 Doorbell Stride: 4 bytes 00:25:27.245 NVM Subsystem Reset: Not Supported 00:25:27.245 Command Sets Supported 00:25:27.245 NVM Command Set: Supported 00:25:27.245 Boot Partition: Not Supported 00:25:27.245 Memory Page Size Minimum: 4096 bytes 00:25:27.245 Memory Page Size Maximum: 4096 bytes 00:25:27.245 Persistent Memory Region: Not Supported 00:25:27.245 Optional Asynchronous Events Supported 00:25:27.245 Namespace Attribute Notices: Not Supported 00:25:27.245 Firmware Activation Notices: Not Supported 00:25:27.245 ANA Change Notices: Not Supported 00:25:27.245 PLE Aggregate Log Change Notices: Not Supported 00:25:27.245 LBA Status Info Alert Notices: Not Supported 00:25:27.245 EGE Aggregate Log Change Notices: Not Supported 00:25:27.245 Normal NVM Subsystem Shutdown event: Not Supported 00:25:27.245 Zone Descriptor Change Notices: Not Supported 00:25:27.245 Discovery Log Change Notices: Supported 00:25:27.245 Controller Attributes 00:25:27.245 128-bit Host Identifier: Not Supported 00:25:27.245 Non-Operational Permissive Mode: Not Supported 00:25:27.245 NVM Sets: Not Supported 00:25:27.245 Read Recovery Levels: Not Supported 00:25:27.245 Endurance Groups: Not Supported 00:25:27.245 Predictable Latency Mode: Not Supported 00:25:27.245 Traffic Based Keep ALive: Not Supported 00:25:27.245 Namespace Granularity: Not Supported 00:25:27.245 SQ Associations: Not Supported 00:25:27.245 UUID List: Not Supported 00:25:27.245 Multi-Domain Subsystem: Not Supported 00:25:27.245 Fixed Capacity Management: Not Supported 00:25:27.245 Variable Capacity Management: Not Supported 00:25:27.245 Delete Endurance Group: Not Supported 00:25:27.245 Delete NVM Set: Not Supported 00:25:27.245 Extended LBA Formats Supported: Not Supported 00:25:27.245 Flexible Data Placement Supported: Not Supported 00:25:27.245 00:25:27.245 Controller Memory Buffer Support 00:25:27.245 ================================ 00:25:27.245 Supported: No 00:25:27.245 00:25:27.245 Persistent Memory Region Support 00:25:27.245 ================================ 00:25:27.245 Supported: No 00:25:27.245 00:25:27.245 Admin Command Set Attributes 00:25:27.245 ============================ 00:25:27.245 Security Send/Receive: Not Supported 00:25:27.245 Format NVM: Not Supported 00:25:27.245 Firmware Activate/Download: Not Supported 00:25:27.245 Namespace Management: Not Supported 00:25:27.245 Device Self-Test: Not Supported 00:25:27.245 Directives: Not Supported 00:25:27.245 NVMe-MI: Not Supported 00:25:27.245 Virtualization Management: Not Supported 00:25:27.245 Doorbell Buffer Config: Not Supported 00:25:27.245 Get LBA Status Capability: Not Supported 00:25:27.245 Command & Feature Lockdown Capability: Not Supported 00:25:27.245 Abort Command Limit: 1 00:25:27.245 
Async Event Request Limit: 4 00:25:27.245 Number of Firmware Slots: N/A 00:25:27.245 Firmware Slot 1 Read-Only: N/A 00:25:27.245 Firmware Activation Without Reset: N/A 00:25:27.245 Multiple Update Detection Support: N/A 00:25:27.245 Firmware Update Granularity: No Information Provided 00:25:27.245 Per-Namespace SMART Log: No 00:25:27.245 Asymmetric Namespace Access Log Page: Not Supported 00:25:27.245 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:27.245 Command Effects Log Page: Not Supported 00:25:27.245 Get Log Page Extended Data: Supported 00:25:27.245 Telemetry Log Pages: Not Supported 00:25:27.245 Persistent Event Log Pages: Not Supported 00:25:27.245 Supported Log Pages Log Page: May Support 00:25:27.245 Commands Supported & Effects Log Page: Not Supported 00:25:27.245 Feature Identifiers & Effects Log Page:May Support 00:25:27.245 NVMe-MI Commands & Effects Log Page: May Support 00:25:27.245 Data Area 4 for Telemetry Log: Not Supported 00:25:27.245 Error Log Page Entries Supported: 128 00:25:27.245 Keep Alive: Not Supported 00:25:27.245 00:25:27.245 NVM Command Set Attributes 00:25:27.245 ========================== 00:25:27.245 Submission Queue Entry Size 00:25:27.245 Max: 1 00:25:27.245 Min: 1 00:25:27.245 Completion Queue Entry Size 00:25:27.245 Max: 1 00:25:27.245 Min: 1 00:25:27.245 Number of Namespaces: 0 00:25:27.245 Compare Command: Not Supported 00:25:27.245 Write Uncorrectable Command: Not Supported 00:25:27.245 Dataset Management Command: Not Supported 00:25:27.245 Write Zeroes Command: Not Supported 00:25:27.245 Set Features Save Field: Not Supported 00:25:27.245 Reservations: Not Supported 00:25:27.245 Timestamp: Not Supported 00:25:27.245 Copy: Not Supported 00:25:27.245 Volatile Write Cache: Not Present 00:25:27.245 Atomic Write Unit (Normal): 1 00:25:27.245 Atomic Write Unit (PFail): 1 00:25:27.245 Atomic Compare & Write Unit: 1 00:25:27.245 Fused Compare & Write: Supported 00:25:27.245 Scatter-Gather List 00:25:27.245 SGL Command Set: Supported 00:25:27.245 SGL Keyed: Supported 00:25:27.245 SGL Bit Bucket Descriptor: Not Supported 00:25:27.245 SGL Metadata Pointer: Not Supported 00:25:27.245 Oversized SGL: Not Supported 00:25:27.245 SGL Metadata Address: Not Supported 00:25:27.245 SGL Offset: Supported 00:25:27.245 Transport SGL Data Block: Not Supported 00:25:27.245 Replay Protected Memory Block: Not Supported 00:25:27.245 00:25:27.245 Firmware Slot Information 00:25:27.245 ========================= 00:25:27.245 Active slot: 0 00:25:27.245 00:25:27.245 00:25:27.245 Error Log 00:25:27.245 ========= 00:25:27.245 00:25:27.245 Active Namespaces 00:25:27.245 ================= 00:25:27.245 Discovery Log Page 00:25:27.245 ================== 00:25:27.245 Generation Counter: 2 00:25:27.245 Number of Records: 2 00:25:27.245 Record Format: 0 00:25:27.245 00:25:27.245 Discovery Log Entry 0 00:25:27.245 ---------------------- 00:25:27.245 Transport Type: 1 (RDMA) 00:25:27.245 Address Family: 1 (IPv4) 00:25:27.245 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:27.245 Entry Flags: 00:25:27.245 Duplicate Returned Information: 1 00:25:27.245 Explicit Persistent Connection Support for Discovery: 1 00:25:27.245 Transport Requirements: 00:25:27.245 Secure Channel: Not Required 00:25:27.245 Port ID: 0 (0x0000) 00:25:27.245 Controller ID: 65535 (0xffff) 00:25:27.245 Admin Max SQ Size: 128 00:25:27.245 Transport Service Identifier: 4420 00:25:27.245 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:27.245 Transport Address: 192.168.100.8 
00:25:27.245 Transport Specific Address Subtype - RDMA 00:25:27.245 RDMA QP Service Type: 1 (Reliable Connected) 00:25:27.245 RDMA Provider Type: 1 (No provider specified) 00:25:27.245 RDMA CM Service: 1 (RDMA_CM) 00:25:27.245 Discovery Log Entry 1 00:25:27.245 ---------------------- 00:25:27.245 Transport Type: 1 (RDMA) 00:25:27.246 Address Family: 1 (IPv4) 00:25:27.246 Subsystem Type: 2 (NVM Subsystem) 00:25:27.246 Entry Flags: 00:25:27.246 Duplicate Returned Information: 0 00:25:27.246 Explicit Persistent Connection Support for Discovery: 0 00:25:27.246 Transport Requirements: 00:25:27.246 Secure Channel: Not Required 00:25:27.246 Port ID: 0 (0x0000) 00:25:27.246 Controller ID: 65535 (0xffff) 00:25:27.246 Admin Max SQ Size: [2024-11-20 12:53:00.203091] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:27.246 [2024-11-20 12:53:00.203102] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 12460 doesn't match qid 00:25:27.246 [2024-11-20 12:53:00.203116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32569 cdw0:5 sqhd:8e28 p:0 m:0 dnr:0 00:25:27.246 [2024-11-20 12:53:00.203121] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 12460 doesn't match qid 00:25:27.246 [2024-11-20 12:53:00.203128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32569 cdw0:5 sqhd:8e28 p:0 m:0 dnr:0 00:25:27.246 [2024-11-20 12:53:00.203133] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 12460 doesn't match qid 00:25:27.246 [2024-11-20 12:53:00.203140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32569 cdw0:5 sqhd:8e28 p:0 m:0 dnr:0 00:25:27.246 [2024-11-20 12:53:00.203145] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 12460 doesn't match qid 00:25:27.246 [2024-11-20 12:53:00.203151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32569 cdw0:5 sqhd:8e28 p:0 m:0 dnr:0 00:25:27.246 [2024-11-20 12:53:00.203160] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.246 [2024-11-20 12:53:00.203184] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.246 [2024-11-20 12:53:00.203189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:25:27.246 [2024-11-20 12:53:00.203197] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.246 [2024-11-20 12:53:00.203209] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203231] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.246 [2024-11-20 12:53:00.203236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:27.246 [2024-11-20 12:53:00.203241] 
nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:27.246 [2024-11-20 12:53:00.203245] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:27.246 [2024-11-20 12:53:00.203250] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203258] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.246 [2024-11-20 12:53:00.203285] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.246 [2024-11-20 12:53:00.203290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:27.246 [2024-11-20 12:53:00.203296] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203305] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.246 [2024-11-20 12:53:00.203331] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.246 [2024-11-20 12:53:00.203336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:27.246 [2024-11-20 12:53:00.203344] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203353] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.246 [2024-11-20 12:53:00.203384] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.246 [2024-11-20 12:53:00.203389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:27.246 [2024-11-20 12:53:00.203394] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203403] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.246 [2024-11-20 12:53:00.203428] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.246 [2024-11-20 12:53:00.203433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:27.246 [2024-11-20 12:53:00.203439] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203447] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: 
*DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.246 [2024-11-20 12:53:00.203471] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.246 [2024-11-20 12:53:00.203475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:27.246 [2024-11-20 12:53:00.203481] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203490] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.246 [2024-11-20 12:53:00.203518] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.246 [2024-11-20 12:53:00.203523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:27.246 [2024-11-20 12:53:00.203528] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203537] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.246 [2024-11-20 12:53:00.203566] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.246 [2024-11-20 12:53:00.203571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:27.246 [2024-11-20 12:53:00.203576] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203585] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.246 [2024-11-20 12:53:00.203612] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.246 [2024-11-20 12:53:00.203617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:27.246 [2024-11-20 12:53:00.203624] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203632] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.246 [2024-11-20 12:53:00.203665] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.246 [2024-11-20 12:53:00.203670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 
dnr:0 00:25:27.246 [2024-11-20 12:53:00.203675] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203684] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.246 [2024-11-20 12:53:00.203713] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.246 [2024-11-20 12:53:00.203717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:27.246 [2024-11-20 12:53:00.203722] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203731] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.246 [2024-11-20 12:53:00.203755] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.246 [2024-11-20 12:53:00.203760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:27.246 [2024-11-20 12:53:00.203765] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203774] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.246 [2024-11-20 12:53:00.203799] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.246 [2024-11-20 12:53:00.203803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:27.246 [2024-11-20 12:53:00.203809] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203817] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.246 [2024-11-20 12:53:00.203824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.246 [2024-11-20 12:53:00.203848] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.247 [2024-11-20 12:53:00.203853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:27.247 [2024-11-20 12:53:00.203858] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.203867] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.203874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.247 [2024-11-20 
12:53:00.203898] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.247 [2024-11-20 12:53:00.203904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:27.247 [2024-11-20 12:53:00.203910] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.203918] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.203925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.247 [2024-11-20 12:53:00.203945] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.247 [2024-11-20 12:53:00.203949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:27.247 [2024-11-20 12:53:00.203955] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.203963] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.203970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.247 [2024-11-20 12:53:00.204003] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.247 [2024-11-20 12:53:00.204008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:27.247 [2024-11-20 12:53:00.204013] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204022] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.247 [2024-11-20 12:53:00.204053] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.247 [2024-11-20 12:53:00.204057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:27.247 [2024-11-20 12:53:00.204063] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204071] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.247 [2024-11-20 12:53:00.204098] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.247 [2024-11-20 12:53:00.204102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:27.247 [2024-11-20 12:53:00.204107] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204116] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.247 [2024-11-20 12:53:00.204147] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.247 [2024-11-20 12:53:00.204151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:27.247 [2024-11-20 12:53:00.204157] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204165] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.247 [2024-11-20 12:53:00.204196] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.247 [2024-11-20 12:53:00.204201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:27.247 [2024-11-20 12:53:00.204206] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204215] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.247 [2024-11-20 12:53:00.204239] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.247 [2024-11-20 12:53:00.204244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:27.247 [2024-11-20 12:53:00.204249] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204258] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.247 [2024-11-20 12:53:00.204286] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.247 [2024-11-20 12:53:00.204291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:27.247 [2024-11-20 12:53:00.204296] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204305] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.247 [2024-11-20 12:53:00.204331] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.247 [2024-11-20 12:53:00.204336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:27.247 
[2024-11-20 12:53:00.204341] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204350] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.247 [2024-11-20 12:53:00.204374] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.247 [2024-11-20 12:53:00.204379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:27.247 [2024-11-20 12:53:00.204384] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204393] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.247 [2024-11-20 12:53:00.204420] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.247 [2024-11-20 12:53:00.204424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:27.247 [2024-11-20 12:53:00.204429] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204438] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.247 [2024-11-20 12:53:00.204464] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.247 [2024-11-20 12:53:00.204469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:27.247 [2024-11-20 12:53:00.204474] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204483] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.247 [2024-11-20 12:53:00.204511] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.247 [2024-11-20 12:53:00.204516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:25:27.247 [2024-11-20 12:53:00.204521] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204530] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.247 [2024-11-20 12:53:00.204537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.204554] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.204559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 12:53:00.204564] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204573] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.204599] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.204604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 12:53:00.204609] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204618] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.204646] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.204651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 12:53:00.204656] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204665] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.204698] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.204703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 12:53:00.204708] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204716] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.204744] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.204749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 12:53:00.204754] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204763] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.204787] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.204792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 12:53:00.204797] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204806] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.204830] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.204835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 12:53:00.204840] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204849] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.204878] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.204882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 12:53:00.204888] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204896] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.204929] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.204934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 12:53:00.204939] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204948] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.204974] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.204979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 
12:53:00.204988] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.204997] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.205005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.205027] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.205032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 12:53:00.205037] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.205045] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.205052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.205076] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.205081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 12:53:00.205086] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.205095] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.205102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.205128] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.205133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 12:53:00.205138] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.205146] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.205153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.205175] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.205180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 12:53:00.205185] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.205194] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.205201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.205223] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.205227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 12:53:00.205232] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.205241] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.205248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.205268] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.205272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 12:53:00.205277] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.205286] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.205296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.205319] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.205324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 12:53:00.205329] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.205338] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.205345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.248 [2024-11-20 12:53:00.205366] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.248 [2024-11-20 12:53:00.205371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:27.248 [2024-11-20 12:53:00.205376] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183b00 00:25:27.248 [2024-11-20 12:53:00.205385] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.249 [2024-11-20 12:53:00.205416] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.249 [2024-11-20 12:53:00.205420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:27.249 [2024-11-20 12:53:00.205425] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205434] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.249 [2024-11-20 12:53:00.205458] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.249 [2024-11-20 12:53:00.205463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:27.249 [2024-11-20 12:53:00.205468] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205477] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.249 [2024-11-20 12:53:00.205505] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.249 [2024-11-20 12:53:00.205510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:27.249 [2024-11-20 12:53:00.205515] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205524] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.249 [2024-11-20 12:53:00.205552] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.249 [2024-11-20 12:53:00.205557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:27.249 [2024-11-20 12:53:00.205562] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205572] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.249 [2024-11-20 12:53:00.205603] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.249 [2024-11-20 12:53:00.205608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:27.249 [2024-11-20 12:53:00.205613] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205622] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.249 [2024-11-20 12:53:00.205646] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.249 [2024-11-20 12:53:00.205651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:27.249 [2024-11-20 
12:53:00.205656] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205665] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.249 [2024-11-20 12:53:00.205691] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.249 [2024-11-20 12:53:00.205696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:27.249 [2024-11-20 12:53:00.205701] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205710] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.249 [2024-11-20 12:53:00.205738] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.249 [2024-11-20 12:53:00.205743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:27.249 [2024-11-20 12:53:00.205748] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205757] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.249 [2024-11-20 12:53:00.205785] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.249 [2024-11-20 12:53:00.205790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:27.249 [2024-11-20 12:53:00.205795] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205804] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.249 [2024-11-20 12:53:00.205828] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.249 [2024-11-20 12:53:00.205833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:27.249 [2024-11-20 12:53:00.205838] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205848] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.249 [2024-11-20 12:53:00.205875] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.249 [2024-11-20 12:53:00.205879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:27.249 [2024-11-20 12:53:00.205884] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205893] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.249 [2024-11-20 12:53:00.205924] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.249 [2024-11-20 12:53:00.205928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:27.249 [2024-11-20 12:53:00.205934] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205942] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.205949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.249 [2024-11-20 12:53:00.205971] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.249 [2024-11-20 12:53:00.205975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:27.249 [2024-11-20 12:53:00.209986] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.209997] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.210004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.249 [2024-11-20 12:53:00.210022] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.249 [2024-11-20 12:53:00.210026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000c p:0 m:0 dnr:0 00:25:27.249 [2024-11-20 12:53:00.210032] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183b00 00:25:27.249 [2024-11-20 12:53:00.210038] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:25:27.249 128 00:25:27.249 Transport Service Identifier: 4420 00:25:27.249 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:27.249 Transport Address: 192.168.100.8 00:25:27.249 Transport Specific Address Subtype - RDMA 00:25:27.249 RDMA QP Service Type: 1 (Reliable Connected) 00:25:27.249 RDMA Provider Type: 1 (No provider specified) 00:25:27.249 RDMA CM Service: 1 (RDMA_CM) 00:25:27.249 12:53:00 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:27.249 [2024-11-20 12:53:00.291214] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 
23.11.0 initialization... 00:25:27.249 [2024-11-20 12:53:00.291255] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634603 ] 00:25:27.249 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.515 [2024-11-20 12:53:00.346256] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:27.515 [2024-11-20 12:53:00.346329] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:25:27.515 [2024-11-20 12:53:00.346345] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:25:27.515 [2024-11-20 12:53:00.346349] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:25:27.515 [2024-11-20 12:53:00.346377] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:27.515 [2024-11-20 12:53:00.359741] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:25:27.515 [2024-11-20 12:53:00.377339] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:27.515 [2024-11-20 12:53:00.377349] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:25:27.515 [2024-11-20 12:53:00.377357] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377363] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377368] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377373] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377378] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377383] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377388] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377393] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377398] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377403] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377408] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377413] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377418] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377424] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377429] nvme_rdma.c: 
964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377434] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377439] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377444] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377449] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377454] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377459] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377464] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377474] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377481] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377486] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377492] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183b00 00:25:27.515 [2024-11-20 12:53:00.377497] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.377502] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.377507] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.377512] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.377518] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.377522] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:25:27.516 [2024-11-20 12:53:00.377527] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:27.516 [2024-11-20 12:53:00.377530] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:25:27.516 [2024-11-20 12:53:00.377546] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.377557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x183b00 00:25:27.516 [2024-11-20 12:53:00.383989] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.516 [2024-11-20 12:53:00.383999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:27.516 [2024-11-20 12:53:00.384005] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384011] 
nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:27.516 [2024-11-20 12:53:00.384018] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:27.516 [2024-11-20 12:53:00.384023] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:27.516 [2024-11-20 12:53:00.384033] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.516 [2024-11-20 12:53:00.384057] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.516 [2024-11-20 12:53:00.384062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:25:27.516 [2024-11-20 12:53:00.384068] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:27.516 [2024-11-20 12:53:00.384072] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384078] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:27.516 [2024-11-20 12:53:00.384085] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.516 [2024-11-20 12:53:00.384105] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.516 [2024-11-20 12:53:00.384110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:25:27.516 [2024-11-20 12:53:00.384118] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:27.516 [2024-11-20 12:53:00.384123] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384130] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:27.516 [2024-11-20 12:53:00.384136] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.516 [2024-11-20 12:53:00.384159] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.516 [2024-11-20 12:53:00.384164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:27.516 [2024-11-20 12:53:00.384169] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:27.516 [2024-11-20 12:53:00.384174] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183b00 
00:25:27.516 [2024-11-20 12:53:00.384182] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.516 [2024-11-20 12:53:00.384202] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.516 [2024-11-20 12:53:00.384207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:27.516 [2024-11-20 12:53:00.384212] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:27.516 [2024-11-20 12:53:00.384217] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:27.516 [2024-11-20 12:53:00.384221] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384227] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:27.516 [2024-11-20 12:53:00.384332] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:27.516 [2024-11-20 12:53:00.384336] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:27.516 [2024-11-20 12:53:00.384344] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.516 [2024-11-20 12:53:00.384367] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.516 [2024-11-20 12:53:00.384372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:27.516 [2024-11-20 12:53:00.384377] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:27.516 [2024-11-20 12:53:00.384382] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384390] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.516 [2024-11-20 12:53:00.384410] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.516 [2024-11-20 12:53:00.384414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:27.516 [2024-11-20 12:53:00.384419] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:27.516 [2024-11-20 12:53:00.384424] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 
30000 ms) 00:25:27.516 [2024-11-20 12:53:00.384429] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384435] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:27.516 [2024-11-20 12:53:00.384448] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:27.516 [2024-11-20 12:53:00.384456] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183b00 00:25:27.516 [2024-11-20 12:53:00.384494] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.516 [2024-11-20 12:53:00.384499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:27.516 [2024-11-20 12:53:00.384506] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:27.516 [2024-11-20 12:53:00.384511] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:27.516 [2024-11-20 12:53:00.384516] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:27.516 [2024-11-20 12:53:00.384520] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:27.516 [2024-11-20 12:53:00.384524] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:27.516 [2024-11-20 12:53:00.384529] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:27.516 [2024-11-20 12:53:00.384534] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384542] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:27.516 [2024-11-20 12:53:00.384549] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384556] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.516 [2024-11-20 12:53:00.384572] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.516 [2024-11-20 12:53:00.384577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:27.516 [2024-11-20 12:53:00.384584] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.516 [2024-11-20 12:53:00.384596] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x183b00 00:25:27.516 
[2024-11-20 12:53:00.384602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.516 [2024-11-20 12:53:00.384608] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.516 [2024-11-20 12:53:00.384622] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183b00 00:25:27.516 [2024-11-20 12:53:00.384628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.516 [2024-11-20 12:53:00.384632] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:27.516 [2024-11-20 12:53:00.384637] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.384647] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:27.517 [2024-11-20 12:53:00.384653] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.384660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.517 [2024-11-20 12:53:00.384675] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.517 [2024-11-20 12:53:00.384680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:25:27.517 [2024-11-20 12:53:00.384685] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:27.517 [2024-11-20 12:53:00.384690] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:27.517 [2024-11-20 12:53:00.384695] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.384701] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:27.517 [2024-11-20 12:53:00.384709] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:27.517 [2024-11-20 12:53:00.384716] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.384723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.517 [2024-11-20 12:53:00.384739] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.517 [2024-11-20 12:53:00.384743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:25:27.517 [2024-11-20 12:53:00.384805] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:27.517 [2024-11-20 12:53:00.384810] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.384817] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:27.517 [2024-11-20 12:53:00.384825] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.384831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183b00 00:25:27.517 [2024-11-20 12:53:00.384853] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.517 [2024-11-20 12:53:00.384858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:27.517 [2024-11-20 12:53:00.384870] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:27.517 [2024-11-20 12:53:00.384880] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:27.517 [2024-11-20 12:53:00.384885] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.384892] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:27.517 [2024-11-20 12:53:00.384900] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.384908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183b00 00:25:27.517 [2024-11-20 12:53:00.384935] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.517 [2024-11-20 12:53:00.384940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:27.517 [2024-11-20 12:53:00.384950] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:27.517 [2024-11-20 12:53:00.384956] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.384962] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:27.517 [2024-11-20 12:53:00.384970] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.384976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183b00 00:25:27.517 [2024-11-20 12:53:00.385000] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.517 [2024-11-20 12:53:00.385005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:27.517 [2024-11-20 12:53:00.385013] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:27.517 [2024-11-20 12:53:00.385018] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.385024] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:27.517 [2024-11-20 12:53:00.385032] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:27.517 [2024-11-20 12:53:00.385038] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:27.517 [2024-11-20 12:53:00.385043] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:27.517 [2024-11-20 12:53:00.385048] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:27.517 [2024-11-20 12:53:00.385052] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:27.517 [2024-11-20 12:53:00.385057] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:27.517 [2024-11-20 12:53:00.385070] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.385077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.517 [2024-11-20 12:53:00.385083] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.385091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.517 [2024-11-20 12:53:00.385100] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.517 [2024-11-20 12:53:00.385105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:27.517 [2024-11-20 12:53:00.385110] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.385116] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.517 [2024-11-20 12:53:00.385120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:27.517 [2024-11-20 12:53:00.385125] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.385133] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.385140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.517 [2024-11-20 
12:53:00.385157] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.517 [2024-11-20 12:53:00.385162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:27.517 [2024-11-20 12:53:00.385167] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.385175] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.385182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.517 [2024-11-20 12:53:00.385197] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.517 [2024-11-20 12:53:00.385201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:27.517 [2024-11-20 12:53:00.385206] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.385214] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.385222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.517 [2024-11-20 12:53:00.385239] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.517 [2024-11-20 12:53:00.385244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:25:27.517 [2024-11-20 12:53:00.385249] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.385259] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.385266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183b00 00:25:27.517 [2024-11-20 12:53:00.385274] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.385282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x183b00 00:25:27.517 [2024-11-20 12:53:00.385289] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b80 length 0x40 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.385296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183b00 00:25:27.517 [2024-11-20 12:53:00.385306] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x183b00 00:25:27.517 [2024-11-20 12:53:00.385312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x183b00 
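
By this point in the trace the controller has been enabled, Identify Controller has completed (MDTS-limited max_xfer_size 131072, CNTLID 0x0001), namespace 1 has been added, and the driver is fetching the startup feature and log-page data. The controller summary printed next in the log appears to come from SPDK's identify utility; the fragment below is a minimal sketch of reading a few of the same fields through the public API, assuming a ctrlr handle obtained as in the earlier sketch (the helper name is hypothetical).

#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Print a few of the Identify Controller / Namespace fields that the
 * summary below is built from. `ctrlr` is assumed to come from
 * spdk_nvme_connect() as in the earlier sketch. */
static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);

    printf("Serial Number:            %.20s\n", cdata->sn);
    printf("Model Number:             %.40s\n", cdata->mn);
    printf("CNTLID:                   0x%04x\n", cdata->cntlid);
    printf("Max Number of Namespaces: %u\n", cdata->nn);

    if (ns != NULL) {
        printf("Namespace 1 size:         %" PRIu64 " bytes\n",
               spdk_nvme_ns_get_size(ns));
    }
}
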
00:25:27.517 [2024-11-20 12:53:00.385320] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:25:27.517 [2024-11-20 12:53:00.385325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:25:27.517 [2024-11-20 12:53:00.385338] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183b00
00:25:27.517 [2024-11-20 12:53:00.385343] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:25:27.517 [2024-11-20 12:53:00.385348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:27.517 [2024-11-20 12:53:00.385356] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183b00
00:25:27.517 [2024-11-20 12:53:00.385361] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:25:27.518 [2024-11-20 12:53:00.385366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:25:27.518 [2024-11-20 12:53:00.385372] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183b00
00:25:27.518 [2024-11-20 12:53:00.385377] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:25:27.518 [2024-11-20 12:53:00.385381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:27.518 [2024-11-20 12:53:00.385390] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183b00
00:25:27.518 =====================================================
00:25:27.518 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:25:27.518 =====================================================
00:25:27.518 Controller Capabilities/Features
00:25:27.518 ================================
00:25:27.518 Vendor ID: 8086
00:25:27.518 Subsystem Vendor ID: 8086
00:25:27.518 Serial Number: SPDK00000000000001
00:25:27.518 Model Number: SPDK bdev Controller
00:25:27.518 Firmware Version: 24.01.1
00:25:27.518 Recommended Arb Burst: 6
00:25:27.518 IEEE OUI Identifier: e4 d2 5c
00:25:27.518 Multi-path I/O
00:25:27.518 May have multiple subsystem ports: Yes
00:25:27.518 May have multiple controllers: Yes
00:25:27.518 Associated with SR-IOV VF: No
00:25:27.518 Max Data Transfer Size: 131072
00:25:27.518 Max Number of Namespaces: 32
00:25:27.518 Max Number of I/O Queues: 127
00:25:27.518 NVMe Specification Version (VS): 1.3
00:25:27.518 NVMe Specification Version (Identify): 1.3
00:25:27.518 Maximum Queue Entries: 128
00:25:27.518 Contiguous Queues Required: Yes
00:25:27.518 Arbitration Mechanisms Supported
00:25:27.518 Weighted Round Robin: Not Supported
00:25:27.518 Vendor Specific: Not Supported
00:25:27.518 Reset Timeout: 15000 ms
00:25:27.518 Doorbell Stride: 4 bytes
00:25:27.518 NVM Subsystem Reset: Not Supported
00:25:27.518 Command Sets Supported
00:25:27.518 NVM Command Set: Supported
00:25:27.518 Boot Partition: Not Supported
00:25:27.518 Memory Page Size Minimum: 4096 bytes
00:25:27.518 Memory Page Size Maximum: 4096 bytes
00:25:27.518 Persistent Memory Region: Not Supported
00:25:27.518 Optional Asynchronous Events Supported
00:25:27.518 Namespace Attribute Notices: Supported
00:25:27.518 Firmware Activation Notices: Not Supported
00:25:27.518 ANA Change Notices: Not Supported
00:25:27.518 PLE Aggregate Log Change Notices: Not Supported
00:25:27.518 LBA Status Info Alert Notices: Not Supported
00:25:27.518 EGE Aggregate Log Change Notices: Not Supported
00:25:27.518 Normal NVM Subsystem Shutdown event: Not Supported
00:25:27.518 Zone Descriptor Change Notices: Not Supported
00:25:27.518 Discovery Log Change Notices: Not Supported
00:25:27.518 Controller Attributes
00:25:27.518 128-bit Host Identifier: Supported
00:25:27.518 Non-Operational Permissive Mode: Not Supported
00:25:27.518 NVM Sets: Not Supported
00:25:27.518 Read Recovery Levels: Not Supported
00:25:27.518 Endurance Groups: Not Supported
00:25:27.518 Predictable Latency Mode: Not Supported
00:25:27.518 Traffic Based Keep ALive: Not Supported
00:25:27.518 Namespace Granularity: Not Supported
00:25:27.518 SQ Associations: Not Supported
00:25:27.518 UUID List: Not Supported
00:25:27.518 Multi-Domain Subsystem: Not Supported
00:25:27.518 Fixed Capacity Management: Not Supported
00:25:27.518 Variable Capacity Management: Not Supported
00:25:27.518 Delete Endurance Group: Not Supported
00:25:27.518 Delete NVM Set: Not Supported
00:25:27.518 Extended LBA Formats Supported: Not Supported
00:25:27.518 Flexible Data Placement Supported: Not Supported
00:25:27.518
00:25:27.518 Controller Memory Buffer Support
00:25:27.518 ================================
00:25:27.518 Supported: No
00:25:27.518
00:25:27.518 Persistent Memory Region Support
00:25:27.518 ================================
00:25:27.518 Supported: No
00:25:27.518
00:25:27.518 Admin Command Set Attributes
00:25:27.518 ============================
00:25:27.518 Security Send/Receive: Not Supported
00:25:27.518 Format NVM: Not Supported
00:25:27.518 Firmware Activate/Download: Not Supported
00:25:27.518 Namespace Management: Not Supported
00:25:27.518 Device Self-Test: Not Supported
00:25:27.518 Directives: Not Supported
00:25:27.518 NVMe-MI: Not Supported
00:25:27.518 Virtualization Management: Not Supported
00:25:27.518 Doorbell Buffer Config: Not Supported
00:25:27.518 Get LBA Status Capability: Not Supported
00:25:27.518 Command & Feature Lockdown Capability: Not Supported
00:25:27.518 Abort Command Limit: 4
00:25:27.518 Async Event Request Limit: 4
00:25:27.518 Number of Firmware Slots: N/A
00:25:27.518 Firmware Slot 1 Read-Only: N/A
00:25:27.518 Firmware Activation Without Reset: N/A
00:25:27.518 Multiple Update Detection Support: N/A
00:25:27.518 Firmware Update Granularity: No Information Provided
00:25:27.518 Per-Namespace SMART Log: No
00:25:27.518 Asymmetric Namespace Access Log Page: Not Supported
00:25:27.518 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:25:27.518 Command Effects Log Page: Supported
00:25:27.518 Get Log Page Extended Data: Supported
00:25:27.518 Telemetry Log Pages: Not Supported
00:25:27.518 Persistent Event Log Pages: Not Supported
00:25:27.518 Supported Log Pages Log Page: May Support
00:25:27.518 Commands Supported & Effects Log Page: Not Supported
00:25:27.518 Feature Identifiers & Effects Log Page:May Support
00:25:27.518 NVMe-MI Commands & Effects Log Page: May Support
00:25:27.518 Data Area 4 for Telemetry Log: Not Supported
00:25:27.518 Error Log Page Entries Supported: 128
00:25:27.518 Keep Alive: Supported
00:25:27.518 Keep Alive Granularity: 10000 ms
00:25:27.518
00:25:27.518 NVM Command Set Attributes
00:25:27.518 ==========================
00:25:27.518 Submission Queue Entry Size
00:25:27.518 Max: 64
00:25:27.518 Min: 64
00:25:27.518 Completion Queue Entry Size
00:25:27.518 Max: 16
00:25:27.518 Min: 16
00:25:27.518 Number of Namespaces: 32
00:25:27.518 Compare Command: Supported
00:25:27.518 Write Uncorrectable Command: Not Supported
00:25:27.518 Dataset Management Command: Supported
00:25:27.518 Write Zeroes Command: Supported
00:25:27.518 Set Features Save Field: Not Supported
00:25:27.518 Reservations: Supported
00:25:27.518 Timestamp: Not Supported
00:25:27.518 Copy: Supported
00:25:27.518 Volatile Write Cache: Present
00:25:27.518 Atomic Write Unit (Normal): 1
00:25:27.518 Atomic Write Unit (PFail): 1
00:25:27.518 Atomic Compare & Write Unit: 1
00:25:27.518 Fused Compare & Write: Supported
00:25:27.518 Scatter-Gather List
00:25:27.518 SGL Command Set: Supported
00:25:27.518 SGL Keyed: Supported
00:25:27.518 SGL Bit Bucket Descriptor: Not Supported
00:25:27.518 SGL Metadata Pointer: Not Supported
00:25:27.518 Oversized SGL: Not Supported
00:25:27.518 SGL Metadata Address: Not Supported
00:25:27.518 SGL Offset: Supported
00:25:27.518 Transport SGL Data Block: Not Supported
00:25:27.518 Replay Protected Memory Block: Not Supported
00:25:27.518
00:25:27.518 Firmware Slot Information
00:25:27.518 =========================
00:25:27.518 Active slot: 1
00:25:27.518 Slot 1 Firmware Revision: 24.01.1
00:25:27.518
00:25:27.518
00:25:27.518 Commands Supported and Effects
00:25:27.518 ==============================
00:25:27.518 Admin Commands
00:25:27.518 --------------
00:25:27.518 Get Log Page (02h): Supported
00:25:27.518 Identify (06h): Supported
00:25:27.518 Abort (08h): Supported
00:25:27.518 Set Features (09h): Supported
00:25:27.518 Get Features (0Ah): Supported
00:25:27.518 Asynchronous Event Request (0Ch): Supported
00:25:27.518 Keep Alive (18h): Supported
00:25:27.518 I/O Commands
00:25:27.518 ------------
00:25:27.518 Flush (00h): Supported LBA-Change
00:25:27.518 Write (01h): Supported LBA-Change
00:25:27.518 Read (02h): Supported
00:25:27.518 Compare (05h): Supported
00:25:27.518 Write Zeroes (08h): Supported LBA-Change
00:25:27.518 Dataset Management (09h): Supported LBA-Change
00:25:27.518 Copy (19h): Supported LBA-Change
00:25:27.518 Unknown (79h): Supported LBA-Change
00:25:27.518 Unknown (7Ah): Supported
00:25:27.518
00:25:27.518 Error Log
00:25:27.518 =========
00:25:27.518
00:25:27.518 Arbitration
00:25:27.518 ===========
00:25:27.518 Arbitration Burst: 1
00:25:27.518
00:25:27.518 Power Management
00:25:27.518 ================
00:25:27.518 Number of Power States: 1
00:25:27.518 Current Power State: Power State #0
00:25:27.518 Power State #0:
00:25:27.518 Max Power: 0.00 W
00:25:27.518 Non-Operational State: Operational
00:25:27.518 Entry Latency: Not Reported
00:25:27.518 Exit Latency: Not Reported
00:25:27.518 Relative Read Throughput: 0
00:25:27.518 Relative Read Latency: 0
00:25:27.518 Relative Write Throughput: 0
00:25:27.518 Relative Write Latency: 0
00:25:27.518 Idle Power: Not Reported
00:25:27.519 Active Power: Not Reported
00:25:27.519 Non-Operational Permissive Mode: Not Supported
00:25:27.519
00:25:27.519 Health Information
00:25:27.519 ==================
00:25:27.519 Critical Warnings:
00:25:27.519 Available Spare Space: OK
00:25:27.519 Temperature: OK
00:25:27.519 Device Reliability: OK
00:25:27.519 Read Only: No
00:25:27.519 Volatile Memory Backup: OK
00:25:27.519 Current Temperature: 0 Kelvin (-273 Celsius)
00:25:27.519 Temperature Threshol[2024-11-20 12:53:00.385483] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x183b00
00:25:27.519 [2024-11-20 12:53:00.385491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET
FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.519 [2024-11-20 12:53:00.385512] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.519 [2024-11-20 12:53:00.385517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.385522] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385546] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:27.519 [2024-11-20 12:53:00.385555] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 52571 doesn't match qid 00:25:27.519 [2024-11-20 12:53:00.385571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32637 cdw0:5 sqhd:2e28 p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.385580] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 52571 doesn't match qid 00:25:27.519 [2024-11-20 12:53:00.385589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32637 cdw0:5 sqhd:2e28 p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.385596] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 52571 doesn't match qid 00:25:27.519 [2024-11-20 12:53:00.385603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32637 cdw0:5 sqhd:2e28 p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.385610] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 52571 doesn't match qid 00:25:27.519 [2024-11-20 12:53:00.385619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32637 cdw0:5 sqhd:2e28 p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.385629] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.519 [2024-11-20 12:53:00.385655] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.519 [2024-11-20 12:53:00.385660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.385668] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.519 [2024-11-20 12:53:00.385681] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385697] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.519 [2024-11-20 12:53:00.385702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.385707] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:27.519 [2024-11-20 12:53:00.385712] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 
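
After the identify summary the driver begins tearing the controller down ("Prepare to destruct SSD"): it programs the shutdown notification (RTD3E = 0 us, so the default 10000 ms shutdown timeout applies) and then repeatedly issues Fabrics Property Get to poll CSTS until shutdown completes, which is what the long run of FABRIC PROPERTY GET entries below is. The fragment here is a minimal sketch of the same teardown from application code, assuming the asynchronous detach helpers in the public SPDK header (the wrapper name is hypothetical).

#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Orderly shutdown of an attached controller: spdk_nvme_detach_async()
 * starts the CC shutdown notification and each poll call advances the
 * CSTS polling seen in the trace until the controller reports shutdown
 * complete (or the shutdown timeout expires). */
static void shutdown_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_detach_ctx *detach_ctx = NULL;

    if (spdk_nvme_detach_async(ctrlr, &detach_ctx) != 0) {
        fprintf(stderr, "failed to start controller detach\n");
        return;
    }

    /* -EAGAIN means the shutdown handshake is still in progress. */
    while (spdk_nvme_detach_poll_async(detach_ctx) == -EAGAIN) {
        ;
    }
}
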
00:25:27.519 [2024-11-20 12:53:00.385717] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385725] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.519 [2024-11-20 12:53:00.385747] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.519 [2024-11-20 12:53:00.385752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.385758] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385766] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.519 [2024-11-20 12:53:00.385787] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.519 [2024-11-20 12:53:00.385792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.385797] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385806] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.519 [2024-11-20 12:53:00.385827] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.519 [2024-11-20 12:53:00.385832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.385838] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385847] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.519 [2024-11-20 12:53:00.385870] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.519 [2024-11-20 12:53:00.385876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.385882] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385890] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.519 [2024-11-20 
12:53:00.385911] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.519 [2024-11-20 12:53:00.385916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.385922] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385930] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.519 [2024-11-20 12:53:00.385955] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.519 [2024-11-20 12:53:00.385960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.385966] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385974] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.385986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.519 [2024-11-20 12:53:00.385998] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.519 [2024-11-20 12:53:00.386004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.386009] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.386018] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.386024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.519 [2024-11-20 12:53:00.386040] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.519 [2024-11-20 12:53:00.386045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.386051] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.386059] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.386066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.519 [2024-11-20 12:53:00.386080] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.519 [2024-11-20 12:53:00.386084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.386090] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.386098] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.386105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.519 [2024-11-20 12:53:00.386119] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.519 [2024-11-20 12:53:00.386125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.386131] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.386139] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.386146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.519 [2024-11-20 12:53:00.386162] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.519 [2024-11-20 12:53:00.386166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:27.519 [2024-11-20 12:53:00.386172] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.386180] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.519 [2024-11-20 12:53:00.386187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.519 [2024-11-20 12:53:00.386202] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.519 [2024-11-20 12:53:00.386206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:27.520 [2024-11-20 12:53:00.386212] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386221] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.520 [2024-11-20 12:53:00.386244] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.520 [2024-11-20 12:53:00.386249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:27.520 [2024-11-20 12:53:00.386254] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386263] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.520 [2024-11-20 12:53:00.386288] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.520 [2024-11-20 12:53:00.386292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:27.520 
[2024-11-20 12:53:00.386298] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386306] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.520 [2024-11-20 12:53:00.386327] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.520 [2024-11-20 12:53:00.386332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:27.520 [2024-11-20 12:53:00.386337] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386346] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.520 [2024-11-20 12:53:00.386368] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.520 [2024-11-20 12:53:00.386373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:27.520 [2024-11-20 12:53:00.386379] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386389] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.520 [2024-11-20 12:53:00.386412] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.520 [2024-11-20 12:53:00.386417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:27.520 [2024-11-20 12:53:00.386423] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386431] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.520 [2024-11-20 12:53:00.386454] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.520 [2024-11-20 12:53:00.386459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:25:27.520 [2024-11-20 12:53:00.386464] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386473] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.520 [2024-11-20 12:53:00.386502] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.520 [2024-11-20 12:53:00.386506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:25:27.520 [2024-11-20 12:53:00.386512] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386522] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.520 [2024-11-20 12:53:00.386543] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.520 [2024-11-20 12:53:00.386548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:25:27.520 [2024-11-20 12:53:00.386553] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386563] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.520 [2024-11-20 12:53:00.386590] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.520 [2024-11-20 12:53:00.386594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:25:27.520 [2024-11-20 12:53:00.386599] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386609] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.520 [2024-11-20 12:53:00.386631] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.520 [2024-11-20 12:53:00.386636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:25:27.520 [2024-11-20 12:53:00.386642] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386651] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.520 [2024-11-20 12:53:00.386675] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.520 [2024-11-20 12:53:00.386681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:25:27.520 [2024-11-20 12:53:00.386687] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386697] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.520 [2024-11-20 12:53:00.386719] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.520 [2024-11-20 12:53:00.386724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:27.520 [2024-11-20 12:53:00.386729] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386737] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.520 [2024-11-20 12:53:00.386762] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.520 [2024-11-20 12:53:00.386766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:27.520 [2024-11-20 12:53:00.386771] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386780] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.520 [2024-11-20 12:53:00.386802] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.520 [2024-11-20 12:53:00.386807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:27.520 [2024-11-20 12:53:00.386812] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386821] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.520 [2024-11-20 12:53:00.386845] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.520 [2024-11-20 12:53:00.386850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:27.520 [2024-11-20 12:53:00.386855] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183b00 00:25:27.520 [2024-11-20 12:53:00.386863] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.386872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.386887] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.386892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 
12:53:00.386897] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.386906] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.386912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.386926] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.386930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 12:53:00.386936] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.386944] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.386951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.386964] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.386969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 12:53:00.386974] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.386986] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.386994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.387011] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.387016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 12:53:00.387021] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387030] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.387050] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.387055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 12:53:00.387060] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387068] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.387089] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.387093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 12:53:00.387099] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387107] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.387127] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.387131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 12:53:00.387137] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387145] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.387165] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.387170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 12:53:00.387175] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387184] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.387208] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.387213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 12:53:00.387218] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387226] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.387247] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.387251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 12:53:00.387257] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387265] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.387287] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.387292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 12:53:00.387297] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387306] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.387326] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.387330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 12:53:00.387336] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387345] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.387366] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.387371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 12:53:00.387376] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387384] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.387404] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.387409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 12:53:00.387414] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387423] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.387441] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.387446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 
12:53:00.387451] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387459] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.387486] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.387490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 12:53:00.387495] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387504] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.387526] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.387531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 12:53:00.387536] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387544] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.521 [2024-11-20 12:53:00.387567] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.521 [2024-11-20 12:53:00.387571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:27.521 [2024-11-20 12:53:00.387576] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183b00 00:25:27.521 [2024-11-20 12:53:00.387586] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.522 [2024-11-20 12:53:00.387606] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.522 [2024-11-20 12:53:00.387611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:27.522 [2024-11-20 12:53:00.387616] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387625] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.522 [2024-11-20 12:53:00.387647] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.522 [2024-11-20 12:53:00.387651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:27.522 [2024-11-20 12:53:00.387657] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387665] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.522 [2024-11-20 12:53:00.387685] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.522 [2024-11-20 12:53:00.387690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:27.522 [2024-11-20 12:53:00.387695] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387704] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.522 [2024-11-20 12:53:00.387729] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.522 [2024-11-20 12:53:00.387733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:27.522 [2024-11-20 12:53:00.387738] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387747] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.522 [2024-11-20 12:53:00.387769] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.522 [2024-11-20 12:53:00.387774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:25:27.522 [2024-11-20 12:53:00.387779] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387787] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.522 [2024-11-20 12:53:00.387812] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.522 [2024-11-20 12:53:00.387816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:25:27.522 [2024-11-20 12:53:00.387823] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387832] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.522 [2024-11-20 12:53:00.387858] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.522 [2024-11-20 12:53:00.387863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:25:27.522 [2024-11-20 12:53:00.387868] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387876] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.522 [2024-11-20 12:53:00.387897] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.522 [2024-11-20 12:53:00.387901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:25:27.522 [2024-11-20 12:53:00.387906] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387915] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.522 [2024-11-20 12:53:00.387941] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.522 [2024-11-20 12:53:00.387946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:25:27.522 [2024-11-20 12:53:00.387951] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387960] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.387966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.522 [2024-11-20 12:53:00.387980] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.522 [2024-11-20 12:53:00.391991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:25:27.522 [2024-11-20 12:53:00.391997] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.392006] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.392013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:27.522 [2024-11-20 12:53:00.392028] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:27.522 [2024-11-20 12:53:00.392033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0012 p:0 m:0 dnr:0 00:25:27.522 [2024-11-20 
12:53:00.392038] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183b00 00:25:27.522 [2024-11-20 12:53:00.392044] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:25:27.522 d: 0 Kelvin (-273 Celsius) 00:25:27.522 Available Spare: 0% 00:25:27.522 Available Spare Threshold: 0% 00:25:27.522 Life Percentage Used: 0% 00:25:27.522 Data Units Read: 0 00:25:27.522 Data Units Written: 0 00:25:27.522 Host Read Commands: 0 00:25:27.522 Host Write Commands: 0 00:25:27.522 Controller Busy Time: 0 minutes 00:25:27.522 Power Cycles: 0 00:25:27.522 Power On Hours: 0 hours 00:25:27.522 Unsafe Shutdowns: 0 00:25:27.522 Unrecoverable Media Errors: 0 00:25:27.522 Lifetime Error Log Entries: 0 00:25:27.522 Warning Temperature Time: 0 minutes 00:25:27.522 Critical Temperature Time: 0 minutes 00:25:27.522 00:25:27.522 Number of Queues 00:25:27.522 ================ 00:25:27.522 Number of I/O Submission Queues: 127 00:25:27.522 Number of I/O Completion Queues: 127 00:25:27.522 00:25:27.522 Active Namespaces 00:25:27.522 ================= 00:25:27.522 Namespace ID:1 00:25:27.522 Error Recovery Timeout: Unlimited 00:25:27.522 Command Set Identifier: NVM (00h) 00:25:27.522 Deallocate: Supported 00:25:27.522 Deallocated/Unwritten Error: Not Supported 00:25:27.522 Deallocated Read Value: Unknown 00:25:27.522 Deallocate in Write Zeroes: Not Supported 00:25:27.522 Deallocated Guard Field: 0xFFFF 00:25:27.522 Flush: Supported 00:25:27.522 Reservation: Supported 00:25:27.522 Namespace Sharing Capabilities: Multiple Controllers 00:25:27.522 Size (in LBAs): 131072 (0GiB) 00:25:27.522 Capacity (in LBAs): 131072 (0GiB) 00:25:27.522 Utilization (in LBAs): 131072 (0GiB) 00:25:27.522 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:27.522 EUI64: ABCDEF0123456789 00:25:27.522 UUID: ca9afe25-2cc0-40b8-a3e9-8fbdc61640c3 00:25:27.522 Thin Provisioning: Not Supported 00:25:27.522 Per-NS Atomic Units: Yes 00:25:27.522 Atomic Boundary Size (Normal): 0 00:25:27.522 Atomic Boundary Size (PFail): 0 00:25:27.522 Atomic Boundary Offset: 0 00:25:27.522 Maximum Single Source Range Length: 65535 00:25:27.522 Maximum Copy Length: 65535 00:25:27.522 Maximum Source Range Count: 1 00:25:27.522 NGUID/EUI64 Never Reused: No 00:25:27.522 Namespace Write Protected: No 00:25:27.522 Number of LBA Formats: 1 00:25:27.522 Current LBA Format: LBA Format #00 00:25:27.522 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:27.522 00:25:27.522 12:53:00 -- host/identify.sh@51 -- # sync 00:25:27.522 12:53:00 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:27.522 12:53:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.522 12:53:00 -- common/autotest_common.sh@10 -- # set +x 00:25:27.522 12:53:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.522 12:53:00 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:27.522 12:53:00 -- host/identify.sh@56 -- # nvmftestfini 00:25:27.522 12:53:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:27.522 12:53:00 -- nvmf/common.sh@116 -- # sync 00:25:27.522 12:53:00 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:25:27.522 12:53:00 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:25:27.522 12:53:00 -- nvmf/common.sh@119 -- # set +e 00:25:27.522 12:53:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:27.522 12:53:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:25:27.522 rmmod nvme_rdma 00:25:27.522 rmmod nvme_fabrics 
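Stripped of the xtrace noise, the teardown traced here is a sync followed by a retry loop around the host-side module unload. A minimal stand-alone sketch of that step (the function name, break-on-success and sleep are illustrative assumptions, not the actual nvmf/common.sh code):

#!/usr/bin/env bash
# Sketch of the nvme-rdma/nvme-fabrics unload step seen in the trace above.
# The real helper lives in nvmf/common.sh; the retry-on-failure details here
# are assumptions for illustration only.
unload_nvmf_host_modules() {
    sync
    set +e
    for i in {1..20}; do
        # unload the transport module first, then the fabrics core
        modprobe -v -r nvme-rdma &&
        modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
}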
00:25:27.522 12:53:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:27.522 12:53:00 -- nvmf/common.sh@123 -- # set -e 00:25:27.522 12:53:00 -- nvmf/common.sh@124 -- # return 0 00:25:27.522 12:53:00 -- nvmf/common.sh@477 -- # '[' -n 634450 ']' 00:25:27.522 12:53:00 -- nvmf/common.sh@478 -- # killprocess 634450 00:25:27.522 12:53:00 -- common/autotest_common.sh@936 -- # '[' -z 634450 ']' 00:25:27.522 12:53:00 -- common/autotest_common.sh@940 -- # kill -0 634450 00:25:27.523 12:53:00 -- common/autotest_common.sh@941 -- # uname 00:25:27.523 12:53:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:27.523 12:53:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 634450 00:25:27.523 12:53:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:27.523 12:53:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:27.523 12:53:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 634450' 00:25:27.523 killing process with pid 634450 00:25:27.523 12:53:00 -- common/autotest_common.sh@955 -- # kill 634450 00:25:27.523 [2024-11-20 12:53:00.598567] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:27.523 12:53:00 -- common/autotest_common.sh@960 -- # wait 634450 00:25:27.783 12:53:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:27.783 12:53:00 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:25:27.783 00:25:27.783 real 0m8.833s 00:25:27.783 user 0m8.645s 00:25:27.783 sys 0m5.526s 00:25:27.783 12:53:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:27.783 12:53:00 -- common/autotest_common.sh@10 -- # set +x 00:25:27.783 ************************************ 00:25:27.783 END TEST nvmf_identify 00:25:27.783 ************************************ 00:25:27.783 12:53:00 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:25:27.783 12:53:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:27.783 12:53:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:27.783 12:53:00 -- common/autotest_common.sh@10 -- # set +x 00:25:27.783 ************************************ 00:25:27.783 START TEST nvmf_perf 00:25:27.783 ************************************ 00:25:27.783 12:53:00 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:25:28.044 * Looking for test storage... 
00:25:28.044 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:28.044 12:53:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:28.044 12:53:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:28.044 12:53:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:28.044 12:53:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:28.044 12:53:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:28.044 12:53:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:28.045 12:53:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:28.045 12:53:01 -- scripts/common.sh@335 -- # IFS=.-: 00:25:28.045 12:53:01 -- scripts/common.sh@335 -- # read -ra ver1 00:25:28.045 12:53:01 -- scripts/common.sh@336 -- # IFS=.-: 00:25:28.045 12:53:01 -- scripts/common.sh@336 -- # read -ra ver2 00:25:28.045 12:53:01 -- scripts/common.sh@337 -- # local 'op=<' 00:25:28.045 12:53:01 -- scripts/common.sh@339 -- # ver1_l=2 00:25:28.045 12:53:01 -- scripts/common.sh@340 -- # ver2_l=1 00:25:28.045 12:53:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:28.045 12:53:01 -- scripts/common.sh@343 -- # case "$op" in 00:25:28.045 12:53:01 -- scripts/common.sh@344 -- # : 1 00:25:28.045 12:53:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:28.045 12:53:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:28.045 12:53:01 -- scripts/common.sh@364 -- # decimal 1 00:25:28.045 12:53:01 -- scripts/common.sh@352 -- # local d=1 00:25:28.045 12:53:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:28.045 12:53:01 -- scripts/common.sh@354 -- # echo 1 00:25:28.045 12:53:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:28.045 12:53:01 -- scripts/common.sh@365 -- # decimal 2 00:25:28.045 12:53:01 -- scripts/common.sh@352 -- # local d=2 00:25:28.045 12:53:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:28.045 12:53:01 -- scripts/common.sh@354 -- # echo 2 00:25:28.045 12:53:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:28.045 12:53:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:28.045 12:53:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:28.045 12:53:01 -- scripts/common.sh@367 -- # return 0 00:25:28.045 12:53:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:28.045 12:53:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:28.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.045 --rc genhtml_branch_coverage=1 00:25:28.045 --rc genhtml_function_coverage=1 00:25:28.045 --rc genhtml_legend=1 00:25:28.045 --rc geninfo_all_blocks=1 00:25:28.045 --rc geninfo_unexecuted_blocks=1 00:25:28.045 00:25:28.045 ' 00:25:28.045 12:53:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:28.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.045 --rc genhtml_branch_coverage=1 00:25:28.045 --rc genhtml_function_coverage=1 00:25:28.045 --rc genhtml_legend=1 00:25:28.045 --rc geninfo_all_blocks=1 00:25:28.045 --rc geninfo_unexecuted_blocks=1 00:25:28.045 00:25:28.045 ' 00:25:28.045 12:53:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:28.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.045 --rc genhtml_branch_coverage=1 00:25:28.045 --rc genhtml_function_coverage=1 00:25:28.045 --rc genhtml_legend=1 00:25:28.045 --rc geninfo_all_blocks=1 00:25:28.045 --rc geninfo_unexecuted_blocks=1 00:25:28.045 00:25:28.045 ' 
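For reference, the dotted-version comparison the lcov check above steps through can be written as a small stand-alone helper. This is an illustrative sketch of the same idea, not the scripts/common.sh implementation (it skips the non-numeric component validation the real script performs):

# Compare two dotted version strings component by component.
# Returns 0 (true) when $1 < $2, mirroring the "lt 1.15 2" trace above.
version_lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # missing components count as 0 (e.g. "2" is treated as "2.0")
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 is older than 2"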
00:25:28.045 12:53:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:28.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.045 --rc genhtml_branch_coverage=1 00:25:28.045 --rc genhtml_function_coverage=1 00:25:28.045 --rc genhtml_legend=1 00:25:28.045 --rc geninfo_all_blocks=1 00:25:28.045 --rc geninfo_unexecuted_blocks=1 00:25:28.045 00:25:28.045 ' 00:25:28.045 12:53:01 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.045 12:53:01 -- nvmf/common.sh@7 -- # uname -s 00:25:28.045 12:53:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.045 12:53:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.045 12:53:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.045 12:53:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.045 12:53:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.045 12:53:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.045 12:53:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.045 12:53:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.045 12:53:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.045 12:53:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.045 12:53:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:28.045 12:53:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:28.045 12:53:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.045 12:53:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.045 12:53:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.045 12:53:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:28.045 12:53:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.045 12:53:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.045 12:53:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.045 12:53:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.045 12:53:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.045 12:53:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.045 12:53:01 -- paths/export.sh@5 -- # export PATH 00:25:28.045 12:53:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.045 12:53:01 -- nvmf/common.sh@46 -- # : 0 00:25:28.045 12:53:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:28.045 12:53:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:28.045 12:53:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:28.045 12:53:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.045 12:53:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.045 12:53:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:28.045 12:53:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:28.045 12:53:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:28.045 12:53:01 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:28.045 12:53:01 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:28.045 12:53:01 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:25:28.045 12:53:01 -- host/perf.sh@17 -- # nvmftestinit 00:25:28.045 12:53:01 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:25:28.045 12:53:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.045 12:53:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:28.045 12:53:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:28.045 12:53:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:28.045 12:53:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.045 12:53:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.045 12:53:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.045 12:53:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:28.045 12:53:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:28.045 12:53:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:28.045 12:53:01 -- common/autotest_common.sh@10 -- # set +x 00:25:36.187 12:53:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:36.187 12:53:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:36.187 12:53:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:36.187 12:53:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:36.187 12:53:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:36.187 12:53:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:36.187 12:53:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:36.187 12:53:08 -- nvmf/common.sh@294 -- # net_devs=() 
00:25:36.187 12:53:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:36.187 12:53:08 -- nvmf/common.sh@295 -- # e810=() 00:25:36.187 12:53:08 -- nvmf/common.sh@295 -- # local -ga e810 00:25:36.187 12:53:08 -- nvmf/common.sh@296 -- # x722=() 00:25:36.187 12:53:08 -- nvmf/common.sh@296 -- # local -ga x722 00:25:36.187 12:53:08 -- nvmf/common.sh@297 -- # mlx=() 00:25:36.187 12:53:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:36.187 12:53:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:36.187 12:53:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:36.187 12:53:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:36.187 12:53:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:36.187 12:53:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:36.187 12:53:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:36.187 12:53:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:36.187 12:53:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:36.187 12:53:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:36.187 12:53:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:36.187 12:53:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:36.187 12:53:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:36.187 12:53:08 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:25:36.187 12:53:08 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:25:36.187 12:53:08 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:25:36.187 12:53:08 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:25:36.187 12:53:08 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:25:36.187 12:53:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:36.187 12:53:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:36.187 12:53:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:25:36.187 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:25:36.187 12:53:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:36.187 12:53:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:36.187 12:53:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:36.187 12:53:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:36.187 12:53:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:36.187 12:53:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:36.187 12:53:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:36.187 12:53:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:25:36.187 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:25:36.187 12:53:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:36.187 12:53:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:36.187 12:53:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:36.187 12:53:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:36.187 12:53:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:36.187 12:53:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:36.187 12:53:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:36.187 12:53:08 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:25:36.187 12:53:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:36.188 12:53:08 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.188 12:53:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:36.188 12:53:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.188 12:53:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:25:36.188 Found net devices under 0000:98:00.0: mlx_0_0 00:25:36.188 12:53:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.188 12:53:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:36.188 12:53:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.188 12:53:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:36.188 12:53:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.188 12:53:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:25:36.188 Found net devices under 0000:98:00.1: mlx_0_1 00:25:36.188 12:53:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.188 12:53:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:36.188 12:53:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:36.188 12:53:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:36.188 12:53:08 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:25:36.188 12:53:08 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:25:36.188 12:53:08 -- nvmf/common.sh@408 -- # rdma_device_init 00:25:36.188 12:53:08 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:25:36.188 12:53:08 -- nvmf/common.sh@57 -- # uname 00:25:36.188 12:53:08 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:25:36.188 12:53:08 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:25:36.188 12:53:08 -- nvmf/common.sh@62 -- # modprobe ib_core 00:25:36.188 12:53:08 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:25:36.188 12:53:08 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:25:36.188 12:53:08 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:25:36.188 12:53:08 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:25:36.188 12:53:08 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:25:36.188 12:53:08 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:25:36.188 12:53:08 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:36.188 12:53:08 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:25:36.188 12:53:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:36.188 12:53:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:36.188 12:53:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:36.188 12:53:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:36.188 12:53:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:36.188 12:53:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:36.188 12:53:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.188 12:53:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:36.188 12:53:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:36.188 12:53:08 -- nvmf/common.sh@104 -- # continue 2 00:25:36.188 12:53:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:36.188 12:53:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.188 12:53:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:36.188 12:53:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.188 12:53:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:36.188 12:53:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:36.188 12:53:08 -- 
nvmf/common.sh@104 -- # continue 2 00:25:36.188 12:53:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:36.188 12:53:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:25:36.188 12:53:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:36.188 12:53:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:36.188 12:53:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:36.188 12:53:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:36.188 12:53:08 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:25:36.188 12:53:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:25:36.188 12:53:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:25:36.188 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:36.188 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:25:36.188 altname enp152s0f0np0 00:25:36.188 altname ens817f0np0 00:25:36.188 inet 192.168.100.8/24 scope global mlx_0_0 00:25:36.188 valid_lft forever preferred_lft forever 00:25:36.188 12:53:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:36.188 12:53:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:25:36.188 12:53:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:36.188 12:53:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:36.188 12:53:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:36.188 12:53:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:36.188 12:53:08 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:25:36.188 12:53:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:25:36.188 12:53:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:25:36.188 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:36.188 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:25:36.188 altname enp152s0f1np1 00:25:36.188 altname ens817f1np1 00:25:36.188 inet 192.168.100.9/24 scope global mlx_0_1 00:25:36.188 valid_lft forever preferred_lft forever 00:25:36.188 12:53:08 -- nvmf/common.sh@410 -- # return 0 00:25:36.188 12:53:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:36.188 12:53:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:36.188 12:53:08 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:25:36.188 12:53:08 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:25:36.188 12:53:08 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:25:36.188 12:53:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:36.188 12:53:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:36.188 12:53:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:36.188 12:53:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:36.188 12:53:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:36.188 12:53:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:36.188 12:53:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.188 12:53:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:36.188 12:53:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:36.188 12:53:08 -- nvmf/common.sh@104 -- # continue 2 00:25:36.188 12:53:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:36.188 12:53:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.188 12:53:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:36.188 12:53:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.188 12:53:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:25:36.188 12:53:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:36.188 12:53:08 -- nvmf/common.sh@104 -- # continue 2 00:25:36.188 12:53:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:36.188 12:53:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:25:36.188 12:53:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:36.188 12:53:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:36.188 12:53:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:36.188 12:53:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:36.188 12:53:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:36.188 12:53:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:25:36.188 12:53:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:36.188 12:53:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:36.188 12:53:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:36.188 12:53:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:36.188 12:53:08 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:25:36.188 192.168.100.9' 00:25:36.188 12:53:08 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:25:36.188 192.168.100.9' 00:25:36.188 12:53:08 -- nvmf/common.sh@445 -- # head -n 1 00:25:36.188 12:53:08 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:36.188 12:53:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:36.188 192.168.100.9' 00:25:36.188 12:53:08 -- nvmf/common.sh@446 -- # tail -n +2 00:25:36.188 12:53:08 -- nvmf/common.sh@446 -- # head -n 1 00:25:36.188 12:53:08 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:36.188 12:53:08 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:25:36.188 12:53:08 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:36.188 12:53:08 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:25:36.188 12:53:08 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:25:36.188 12:53:08 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:25:36.188 12:53:08 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:36.188 12:53:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:36.188 12:53:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:36.188 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:25:36.188 12:53:08 -- nvmf/common.sh@469 -- # nvmfpid=638545 00:25:36.188 12:53:08 -- nvmf/common.sh@470 -- # waitforlisten 638545 00:25:36.188 12:53:08 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:36.188 12:53:08 -- common/autotest_common.sh@829 -- # '[' -z 638545 ']' 00:25:36.188 12:53:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.188 12:53:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:36.188 12:53:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.188 12:53:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:36.188 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:25:36.188 [2024-11-20 12:53:08.316595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:36.188 [2024-11-20 12:53:08.316688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.188 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.188 [2024-11-20 12:53:08.384197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:36.188 [2024-11-20 12:53:08.456433] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:36.188 [2024-11-20 12:53:08.456563] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.188 [2024-11-20 12:53:08.456574] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.188 [2024-11-20 12:53:08.456583] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:36.188 [2024-11-20 12:53:08.456757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.188 [2024-11-20 12:53:08.456873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:36.188 [2024-11-20 12:53:08.457029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.189 [2024-11-20 12:53:08.457029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:36.189 12:53:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:36.189 12:53:09 -- common/autotest_common.sh@862 -- # return 0 00:25:36.189 12:53:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:36.189 12:53:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:36.189 12:53:09 -- common/autotest_common.sh@10 -- # set +x 00:25:36.189 12:53:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.189 12:53:09 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:36.189 12:53:09 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:36.761 12:53:09 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:36.761 12:53:09 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:36.761 12:53:09 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:36.761 12:53:09 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:37.021 12:53:09 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:37.021 12:53:09 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:25:37.021 12:53:09 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:37.021 12:53:09 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:25:37.021 12:53:09 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:25:37.281 [2024-11-20 12:53:10.132178] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:25:37.281 [2024-11-20 12:53:10.162793] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d3b710/0x1d496f0) succeed. 00:25:37.281 [2024-11-20 12:53:10.177659] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d3cd00/0x1d8ad90) succeed. 
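Condensed from the rpc.py calls traced here and continued just below, the target-side provisioning for this perf run amounts to the following sketch (values are taken from the trace; this is not the perf.sh source itself):

# Sketch of the RDMA target setup traced in this test: discover the local
# NVMe bdev's PCI address, create a malloc bdev, and export both namespaces
# through one subsystem listening on the first RDMA-capable interface.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# PCI address of the local NVMe controller, used later for the local-PCIe
# baseline run of spdk_nvme_perf
trid=$($rpc framework_get_config bdev | jq -r '.[].params | select(.name=="Nvme0").traddr')   # -> 0000:65:00.0

$rpc bdev_malloc_create 64 512                                      # -> Malloc0
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
$rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns "$nqn" Malloc0
$rpc nvmf_subsystem_add_ns "$nqn" Nvme0n1
$rpc nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420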
00:25:37.281 12:53:10 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:37.542 12:53:10 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:37.542 12:53:10 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:37.804 12:53:10 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:37.804 12:53:10 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:37.804 12:53:10 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:38.065 [2024-11-20 12:53:10.972034] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:38.065 12:53:11 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:38.325 12:53:11 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:38.325 12:53:11 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:38.325 12:53:11 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:38.325 12:53:11 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:39.708 Initializing NVMe Controllers 00:25:39.708 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:25:39.708 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:39.708 Initialization complete. Launching workers. 00:25:39.708 ======================================================== 00:25:39.708 Latency(us) 00:25:39.708 Device Information : IOPS MiB/s Average min max 00:25:39.708 PCIE (0000:65:00.0) NSID 1 from core 0: 80597.60 314.83 396.59 13.31 5255.30 00:25:39.708 ======================================================== 00:25:39.708 Total : 80597.60 314.83 396.59 13.31 5255.30 00:25:39.708 00:25:39.708 12:53:12 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:39.708 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.006 Initializing NVMe Controllers 00:25:43.006 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:43.006 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:43.006 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:43.006 Initialization complete. Launching workers. 
00:25:43.006 ======================================================== 00:25:43.006 Latency(us) 00:25:43.006 Device Information : IOPS MiB/s Average min max 00:25:43.006 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9861.96 38.52 100.32 36.59 7307.18 00:25:43.006 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7238.97 28.28 137.32 53.66 7248.97 00:25:43.006 ======================================================== 00:25:43.006 Total : 17100.93 66.80 115.98 36.59 7307.18 00:25:43.006 00:25:43.006 12:53:15 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:43.006 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.306 Initializing NVMe Controllers 00:25:46.306 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:46.306 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:46.306 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:46.306 Initialization complete. Launching workers. 00:25:46.306 ======================================================== 00:25:46.306 Latency(us) 00:25:46.306 Device Information : IOPS MiB/s Average min max 00:25:46.306 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 21058.73 82.26 1519.94 389.17 6952.05 00:25:46.306 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4011.95 15.67 8034.99 5373.45 15091.88 00:25:46.306 ======================================================== 00:25:46.306 Total : 25070.67 97.93 2562.51 389.17 15091.88 00:25:46.306 00:25:46.306 12:53:19 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:25:46.306 12:53:19 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:46.306 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.596 Initializing NVMe Controllers 00:25:51.596 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:51.596 Controller IO queue size 128, less than required. 00:25:51.596 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:51.596 Controller IO queue size 128, less than required. 00:25:51.596 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:51.596 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:51.596 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:51.596 Initialization complete. Launching workers. 
00:25:51.596 ======================================================== 00:25:51.596 Latency(us) 00:25:51.596 Device Information : IOPS MiB/s Average min max 00:25:51.596 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5125.73 1281.43 25059.92 9904.09 61263.95 00:25:51.596 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5227.70 1306.93 24274.41 10414.38 39203.39 00:25:51.596 ======================================================== 00:25:51.597 Total : 10353.43 2588.36 24663.30 9904.09 61263.95 00:25:51.597 00:25:51.597 12:53:23 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:25:51.597 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.597 No valid NVMe controllers or AIO or URING devices found 00:25:51.597 Initializing NVMe Controllers 00:25:51.597 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:51.597 Controller IO queue size 128, less than required. 00:25:51.597 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:51.597 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:51.597 Controller IO queue size 128, less than required. 00:25:51.597 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:51.597 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:51.597 WARNING: Some requested NVMe devices were skipped 00:25:51.597 12:53:24 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:25:51.597 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.802 Initializing NVMe Controllers 00:25:55.802 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:55.802 Controller IO queue size 128, less than required. 00:25:55.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:55.802 Controller IO queue size 128, less than required. 00:25:55.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:55.802 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:55.802 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:55.802 Initialization complete. Launching workers. 
00:25:55.802 00:25:55.802 ==================== 00:25:55.802 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:55.802 RDMA transport: 00:25:55.802 dev name: mlx5_0 00:25:55.802 polls: 275975 00:25:55.802 idle_polls: 271591 00:25:55.802 completions: 54731 00:25:55.802 queued_requests: 1 00:25:55.802 total_send_wrs: 27429 00:25:55.802 send_doorbell_updates: 3960 00:25:55.802 total_recv_wrs: 27429 00:25:55.802 recv_doorbell_updates: 3960 00:25:55.802 --------------------------------- 00:25:55.802 00:25:55.802 ==================== 00:25:55.802 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:55.802 RDMA transport: 00:25:55.802 dev name: mlx5_0 00:25:55.802 polls: 272788 00:25:55.802 idle_polls: 272531 00:25:55.802 completions: 17911 00:25:55.802 queued_requests: 1 00:25:55.802 total_send_wrs: 9030 00:25:55.802 send_doorbell_updates: 248 00:25:55.802 total_recv_wrs: 9030 00:25:55.803 recv_doorbell_updates: 249 00:25:55.803 --------------------------------- 00:25:55.803 ======================================================== 00:25:55.803 Latency(us) 00:25:55.803 Device Information : IOPS MiB/s Average min max 00:25:55.803 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6880.58 1720.14 18600.66 7984.60 47705.17 00:25:55.803 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2286.20 571.55 56188.06 31503.19 83926.99 00:25:55.803 ======================================================== 00:25:55.803 Total : 9166.78 2291.69 27974.98 7984.60 83926.99 00:25:55.803 00:25:55.803 12:53:28 -- host/perf.sh@66 -- # sync 00:25:55.803 12:53:28 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:55.803 12:53:28 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:55.803 12:53:28 -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:25:55.803 12:53:28 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:25:56.744 12:53:29 -- host/perf.sh@72 -- # ls_guid=434473dd-8406-4cea-9978-24e635ce71b6 00:25:56.744 12:53:29 -- host/perf.sh@73 -- # get_lvs_free_mb 434473dd-8406-4cea-9978-24e635ce71b6 00:25:56.744 12:53:29 -- common/autotest_common.sh@1353 -- # local lvs_uuid=434473dd-8406-4cea-9978-24e635ce71b6 00:25:56.744 12:53:29 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:56.744 12:53:29 -- common/autotest_common.sh@1355 -- # local fc 00:25:56.744 12:53:29 -- common/autotest_common.sh@1356 -- # local cs 00:25:56.744 12:53:29 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:57.005 12:53:29 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:57.005 { 00:25:57.005 "uuid": "434473dd-8406-4cea-9978-24e635ce71b6", 00:25:57.005 "name": "lvs_0", 00:25:57.006 "base_bdev": "Nvme0n1", 00:25:57.006 "total_data_clusters": 457407, 00:25:57.006 "free_clusters": 457407, 00:25:57.006 "block_size": 512, 00:25:57.006 "cluster_size": 4194304 00:25:57.006 } 00:25:57.006 ]' 00:25:57.006 12:53:29 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="434473dd-8406-4cea-9978-24e635ce71b6") .free_clusters' 00:25:57.006 12:53:30 -- common/autotest_common.sh@1358 -- # fc=457407 00:25:57.006 12:53:30 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="434473dd-8406-4cea-9978-24e635ce71b6") .cluster_size' 00:25:57.006 12:53:30 
-- common/autotest_common.sh@1359 -- # cs=4194304 00:25:57.006 12:53:30 -- common/autotest_common.sh@1362 -- # free_mb=1829628 00:25:57.006 12:53:30 -- common/autotest_common.sh@1363 -- # echo 1829628 00:25:57.006 1829628 00:25:57.006 12:53:30 -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:25:57.006 12:53:30 -- host/perf.sh@78 -- # free_mb=20480 00:25:57.006 12:53:30 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 434473dd-8406-4cea-9978-24e635ce71b6 lbd_0 20480 00:25:57.266 12:53:30 -- host/perf.sh@80 -- # lb_guid=374f37c3-42d1-4360-a4f5-79d09be42528 00:25:57.266 12:53:30 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 374f37c3-42d1-4360-a4f5-79d09be42528 lvs_n_0 00:25:59.178 12:53:31 -- host/perf.sh@83 -- # ls_nested_guid=47aaddee-ba22-4748-81eb-05890661616b 00:25:59.178 12:53:31 -- host/perf.sh@84 -- # get_lvs_free_mb 47aaddee-ba22-4748-81eb-05890661616b 00:25:59.178 12:53:31 -- common/autotest_common.sh@1353 -- # local lvs_uuid=47aaddee-ba22-4748-81eb-05890661616b 00:25:59.178 12:53:31 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:59.178 12:53:31 -- common/autotest_common.sh@1355 -- # local fc 00:25:59.178 12:53:31 -- common/autotest_common.sh@1356 -- # local cs 00:25:59.178 12:53:31 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:59.178 12:53:32 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:59.178 { 00:25:59.178 "uuid": "434473dd-8406-4cea-9978-24e635ce71b6", 00:25:59.178 "name": "lvs_0", 00:25:59.178 "base_bdev": "Nvme0n1", 00:25:59.178 "total_data_clusters": 457407, 00:25:59.178 "free_clusters": 452287, 00:25:59.178 "block_size": 512, 00:25:59.178 "cluster_size": 4194304 00:25:59.178 }, 00:25:59.178 { 00:25:59.178 "uuid": "47aaddee-ba22-4748-81eb-05890661616b", 00:25:59.178 "name": "lvs_n_0", 00:25:59.178 "base_bdev": "374f37c3-42d1-4360-a4f5-79d09be42528", 00:25:59.178 "total_data_clusters": 5114, 00:25:59.178 "free_clusters": 5114, 00:25:59.178 "block_size": 512, 00:25:59.178 "cluster_size": 4194304 00:25:59.178 } 00:25:59.179 ]' 00:25:59.179 12:53:32 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="47aaddee-ba22-4748-81eb-05890661616b") .free_clusters' 00:25:59.179 12:53:32 -- common/autotest_common.sh@1358 -- # fc=5114 00:25:59.179 12:53:32 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="47aaddee-ba22-4748-81eb-05890661616b") .cluster_size' 00:25:59.179 12:53:32 -- common/autotest_common.sh@1359 -- # cs=4194304 00:25:59.179 12:53:32 -- common/autotest_common.sh@1362 -- # free_mb=20456 00:25:59.179 12:53:32 -- common/autotest_common.sh@1363 -- # echo 20456 00:25:59.179 20456 00:25:59.179 12:53:32 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:59.179 12:53:32 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 47aaddee-ba22-4748-81eb-05890661616b lbd_nest_0 20456 00:25:59.440 12:53:32 -- host/perf.sh@88 -- # lb_nested_guid=5d62e01c-8e45-4350-b042-ac29a76d8801 00:25:59.440 12:53:32 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:59.440 12:53:32 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:25:59.440 12:53:32 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
5d62e01c-8e45-4350-b042-ac29a76d8801 00:25:59.701 12:53:32 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:59.961 12:53:32 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:25:59.961 12:53:32 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:25:59.961 12:53:32 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:59.961 12:53:32 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:59.961 12:53:32 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:59.961 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.214 Initializing NVMe Controllers 00:26:12.214 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:12.214 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:12.214 Initialization complete. Launching workers. 00:26:12.215 ======================================================== 00:26:12.215 Latency(us) 00:26:12.215 Device Information : IOPS MiB/s Average min max 00:26:12.215 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6537.00 3.19 152.40 60.84 8018.04 00:26:12.215 ======================================================== 00:26:12.215 Total : 6537.00 3.19 152.40 60.84 8018.04 00:26:12.215 00:26:12.215 12:53:44 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:12.215 12:53:44 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:12.215 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.483 Initializing NVMe Controllers 00:26:24.483 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:24.483 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:24.483 Initialization complete. Launching workers. 00:26:24.483 ======================================================== 00:26:24.483 Latency(us) 00:26:24.483 Device Information : IOPS MiB/s Average min max 00:26:24.483 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3038.20 379.77 328.09 129.57 8150.10 00:26:24.483 ======================================================== 00:26:24.483 Total : 3038.20 379.77 328.09 129.57 8150.10 00:26:24.483 00:26:24.483 12:53:55 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:24.483 12:53:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:24.483 12:53:55 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:24.483 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.485 Initializing NVMe Controllers 00:26:34.485 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:34.485 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:34.485 Initialization complete. Launching workers. 
00:26:34.485 ======================================================== 00:26:34.485 Latency(us) 00:26:34.485 Device Information : IOPS MiB/s Average min max 00:26:34.485 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13105.91 6.40 2441.25 679.56 9139.39 00:26:34.485 ======================================================== 00:26:34.485 Total : 13105.91 6.40 2441.25 679.56 9139.39 00:26:34.485 00:26:34.485 12:54:07 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:34.485 12:54:07 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:34.485 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.723 Initializing NVMe Controllers 00:26:46.723 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:46.723 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:46.723 Initialization complete. Launching workers. 00:26:46.723 ======================================================== 00:26:46.723 Latency(us) 00:26:46.723 Device Information : IOPS MiB/s Average min max 00:26:46.723 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3961.40 495.18 8082.66 4863.30 19941.80 00:26:46.723 ======================================================== 00:26:46.723 Total : 3961.40 495.18 8082.66 4863.30 19941.80 00:26:46.723 00:26:46.723 12:54:18 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:46.723 12:54:18 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:46.723 12:54:18 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:46.723 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.960 Initializing NVMe Controllers 00:26:58.960 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:58.960 Controller IO queue size 128, less than required. 00:26:58.960 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:58.960 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:58.960 Initialization complete. Launching workers. 00:26:58.960 ======================================================== 00:26:58.960 Latency(us) 00:26:58.960 Device Information : IOPS MiB/s Average min max 00:26:58.960 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 20713.63 10.11 6182.04 1574.87 15595.30 00:26:58.960 ======================================================== 00:26:58.960 Total : 20713.63 10.11 6182.04 1574.87 15595.30 00:26:58.960 00:26:58.960 12:54:29 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:58.960 12:54:29 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:58.960 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.959 Initializing NVMe Controllers 00:27:08.959 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:08.959 Controller IO queue size 128, less than required. 00:27:08.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:08.959 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:08.959 Initialization complete. Launching workers. 00:27:08.959 ======================================================== 00:27:08.959 Latency(us) 00:27:08.959 Device Information : IOPS MiB/s Average min max 00:27:08.959 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12681.70 1585.21 10095.18 3210.62 22049.83 00:27:08.959 ======================================================== 00:27:08.959 Total : 12681.70 1585.21 10095.18 3210.62 22049.83 00:27:08.959 00:27:08.959 12:54:41 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.959 12:54:41 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5d62e01c-8e45-4350-b042-ac29a76d8801 00:27:10.341 12:54:43 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:10.341 12:54:43 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 374f37c3-42d1-4360-a4f5-79d09be42528 00:27:10.341 12:54:43 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:10.601 12:54:43 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:10.601 12:54:43 -- host/perf.sh@114 -- # nvmftestfini 00:27:10.601 12:54:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:10.601 12:54:43 -- nvmf/common.sh@116 -- # sync 00:27:10.601 12:54:43 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:10.601 12:54:43 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:10.601 12:54:43 -- nvmf/common.sh@119 -- # set +e 00:27:10.601 12:54:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:10.601 12:54:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:10.601 rmmod nvme_rdma 00:27:10.601 rmmod nvme_fabrics 00:27:10.601 12:54:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:10.601 12:54:43 -- nvmf/common.sh@123 -- # set -e 00:27:10.601 12:54:43 -- nvmf/common.sh@124 -- # return 0 00:27:10.601 12:54:43 -- nvmf/common.sh@477 -- # '[' -n 638545 ']' 00:27:10.601 12:54:43 -- nvmf/common.sh@478 -- # killprocess 638545 00:27:10.601 12:54:43 -- common/autotest_common.sh@936 -- # '[' -z 638545 ']' 00:27:10.601 12:54:43 -- common/autotest_common.sh@940 -- # kill -0 638545 00:27:10.601 12:54:43 -- common/autotest_common.sh@941 -- # uname 00:27:10.601 12:54:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:10.601 12:54:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 638545 00:27:10.861 12:54:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:10.861 12:54:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:10.861 12:54:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 638545' 00:27:10.861 killing process with pid 638545 00:27:10.861 12:54:43 -- common/autotest_common.sh@955 -- # kill 638545 00:27:10.861 12:54:43 -- common/autotest_common.sh@960 -- # wait 638545 00:27:12.783 12:54:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:12.783 12:54:45 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:12.783 00:27:12.783 real 1m44.898s 00:27:12.783 user 6m33.644s 00:27:12.783 sys 0m7.067s 00:27:12.783 12:54:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:12.783 12:54:45 -- common/autotest_common.sh@10 -- # 
set +x 00:27:12.783 ************************************ 00:27:12.783 END TEST nvmf_perf 00:27:12.783 ************************************ 00:27:12.783 12:54:45 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:27:12.783 12:54:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:12.783 12:54:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:12.783 12:54:45 -- common/autotest_common.sh@10 -- # set +x 00:27:12.783 ************************************ 00:27:12.783 START TEST nvmf_fio_host 00:27:12.783 ************************************ 00:27:12.783 12:54:45 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:27:13.044 * Looking for test storage... 00:27:13.044 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:13.044 12:54:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:13.044 12:54:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:13.044 12:54:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:13.044 12:54:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:13.044 12:54:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:13.044 12:54:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:13.044 12:54:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:13.044 12:54:45 -- scripts/common.sh@335 -- # IFS=.-: 00:27:13.044 12:54:45 -- scripts/common.sh@335 -- # read -ra ver1 00:27:13.044 12:54:45 -- scripts/common.sh@336 -- # IFS=.-: 00:27:13.044 12:54:45 -- scripts/common.sh@336 -- # read -ra ver2 00:27:13.044 12:54:45 -- scripts/common.sh@337 -- # local 'op=<' 00:27:13.044 12:54:45 -- scripts/common.sh@339 -- # ver1_l=2 00:27:13.044 12:54:45 -- scripts/common.sh@340 -- # ver2_l=1 00:27:13.044 12:54:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:13.044 12:54:45 -- scripts/common.sh@343 -- # case "$op" in 00:27:13.044 12:54:45 -- scripts/common.sh@344 -- # : 1 00:27:13.044 12:54:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:13.044 12:54:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:13.044 12:54:45 -- scripts/common.sh@364 -- # decimal 1 00:27:13.044 12:54:45 -- scripts/common.sh@352 -- # local d=1 00:27:13.044 12:54:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:13.044 12:54:45 -- scripts/common.sh@354 -- # echo 1 00:27:13.044 12:54:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:13.044 12:54:45 -- scripts/common.sh@365 -- # decimal 2 00:27:13.044 12:54:45 -- scripts/common.sh@352 -- # local d=2 00:27:13.044 12:54:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:13.044 12:54:45 -- scripts/common.sh@354 -- # echo 2 00:27:13.044 12:54:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:13.044 12:54:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:13.044 12:54:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:13.044 12:54:45 -- scripts/common.sh@367 -- # return 0 00:27:13.044 12:54:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:13.044 12:54:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:13.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.044 --rc genhtml_branch_coverage=1 00:27:13.044 --rc genhtml_function_coverage=1 00:27:13.044 --rc genhtml_legend=1 00:27:13.044 --rc geninfo_all_blocks=1 00:27:13.044 --rc geninfo_unexecuted_blocks=1 00:27:13.044 00:27:13.044 ' 00:27:13.044 12:54:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:13.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.044 --rc genhtml_branch_coverage=1 00:27:13.044 --rc genhtml_function_coverage=1 00:27:13.044 --rc genhtml_legend=1 00:27:13.044 --rc geninfo_all_blocks=1 00:27:13.044 --rc geninfo_unexecuted_blocks=1 00:27:13.044 00:27:13.044 ' 00:27:13.044 12:54:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:13.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.044 --rc genhtml_branch_coverage=1 00:27:13.044 --rc genhtml_function_coverage=1 00:27:13.044 --rc genhtml_legend=1 00:27:13.044 --rc geninfo_all_blocks=1 00:27:13.044 --rc geninfo_unexecuted_blocks=1 00:27:13.044 00:27:13.044 ' 00:27:13.044 12:54:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:13.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.044 --rc genhtml_branch_coverage=1 00:27:13.044 --rc genhtml_function_coverage=1 00:27:13.044 --rc genhtml_legend=1 00:27:13.044 --rc geninfo_all_blocks=1 00:27:13.044 --rc geninfo_unexecuted_blocks=1 00:27:13.044 00:27:13.044 ' 00:27:13.044 12:54:46 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:13.044 12:54:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:13.044 12:54:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:13.044 12:54:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:13.044 12:54:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.044 12:54:46 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.044 12:54:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.044 12:54:46 -- paths/export.sh@5 -- # export PATH 00:27:13.044 12:54:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.044 12:54:46 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:13.044 12:54:46 -- nvmf/common.sh@7 -- # uname -s 00:27:13.044 12:54:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:13.044 12:54:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:13.045 12:54:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:13.045 12:54:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:13.045 12:54:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:13.045 12:54:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:13.045 12:54:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:13.045 12:54:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:13.045 12:54:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:13.045 12:54:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:13.045 12:54:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:13.045 12:54:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:13.045 12:54:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:13.045 12:54:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:13.045 12:54:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:13.045 12:54:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:13.045 12:54:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:13.045 12:54:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:13.045 12:54:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:13.045 12:54:46 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.045 12:54:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.045 12:54:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.045 12:54:46 -- paths/export.sh@5 -- # export PATH 00:27:13.045 12:54:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.045 12:54:46 -- nvmf/common.sh@46 -- # : 0 00:27:13.045 12:54:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:13.045 12:54:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:13.045 12:54:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:13.045 12:54:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:13.045 12:54:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:13.045 12:54:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:13.045 12:54:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:13.045 12:54:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:13.045 12:54:46 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:13.045 12:54:46 -- host/fio.sh@14 -- # nvmftestinit 00:27:13.045 12:54:46 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:13.045 12:54:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:13.045 12:54:46 -- 
nvmf/common.sh@436 -- # prepare_net_devs 00:27:13.045 12:54:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:13.045 12:54:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:13.045 12:54:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.045 12:54:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:13.045 12:54:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.045 12:54:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:13.045 12:54:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:13.045 12:54:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:13.045 12:54:46 -- common/autotest_common.sh@10 -- # set +x 00:27:21.184 12:54:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:21.184 12:54:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:21.184 12:54:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:21.184 12:54:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:21.184 12:54:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:21.184 12:54:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:21.184 12:54:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:21.184 12:54:53 -- nvmf/common.sh@294 -- # net_devs=() 00:27:21.184 12:54:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:21.184 12:54:53 -- nvmf/common.sh@295 -- # e810=() 00:27:21.184 12:54:53 -- nvmf/common.sh@295 -- # local -ga e810 00:27:21.184 12:54:53 -- nvmf/common.sh@296 -- # x722=() 00:27:21.184 12:54:53 -- nvmf/common.sh@296 -- # local -ga x722 00:27:21.184 12:54:53 -- nvmf/common.sh@297 -- # mlx=() 00:27:21.184 12:54:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:21.184 12:54:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.184 12:54:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.184 12:54:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.184 12:54:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.184 12:54:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.184 12:54:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.185 12:54:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.185 12:54:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.185 12:54:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.185 12:54:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.185 12:54:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.185 12:54:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:21.185 12:54:53 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:27:21.185 12:54:53 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:27:21.185 12:54:53 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:27:21.185 12:54:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:21.185 12:54:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:21.185 12:54:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:27:21.185 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:27:21.185 12:54:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == 
unbound ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:21.185 12:54:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:21.185 12:54:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:27:21.185 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:27:21.185 12:54:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:21.185 12:54:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:21.185 12:54:53 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:21.185 12:54:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.185 12:54:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:21.185 12:54:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.185 12:54:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:27:21.185 Found net devices under 0000:98:00.0: mlx_0_0 00:27:21.185 12:54:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.185 12:54:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:21.185 12:54:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.185 12:54:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:21.185 12:54:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.185 12:54:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:27:21.185 Found net devices under 0000:98:00.1: mlx_0_1 00:27:21.185 12:54:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.185 12:54:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:21.185 12:54:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:21.185 12:54:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@408 -- # rdma_device_init 00:27:21.185 12:54:53 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:27:21.185 12:54:53 -- nvmf/common.sh@57 -- # uname 00:27:21.185 12:54:53 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:27:21.185 12:54:53 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:27:21.185 12:54:53 -- nvmf/common.sh@62 -- # modprobe ib_core 00:27:21.185 12:54:53 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:27:21.185 12:54:53 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:27:21.185 12:54:53 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:27:21.185 12:54:53 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:27:21.185 12:54:53 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:27:21.185 12:54:53 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:27:21.185 12:54:53 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:21.185 12:54:53 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:27:21.185 12:54:53 -- 
nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:21.185 12:54:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:21.185 12:54:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:21.185 12:54:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:21.185 12:54:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:21.185 12:54:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:21.185 12:54:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:21.185 12:54:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:21.185 12:54:53 -- nvmf/common.sh@104 -- # continue 2 00:27:21.185 12:54:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:21.185 12:54:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:21.185 12:54:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:21.185 12:54:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:21.185 12:54:53 -- nvmf/common.sh@104 -- # continue 2 00:27:21.185 12:54:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:21.185 12:54:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:27:21.185 12:54:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:21.185 12:54:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:21.185 12:54:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:21.185 12:54:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:21.185 12:54:53 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:27:21.185 12:54:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:27:21.185 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:21.185 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:27:21.185 altname enp152s0f0np0 00:27:21.185 altname ens817f0np0 00:27:21.185 inet 192.168.100.8/24 scope global mlx_0_0 00:27:21.185 valid_lft forever preferred_lft forever 00:27:21.185 12:54:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:21.185 12:54:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:27:21.185 12:54:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:21.185 12:54:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:21.185 12:54:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:21.185 12:54:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:21.185 12:54:53 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:27:21.185 12:54:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:27:21.185 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:21.185 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:27:21.185 altname enp152s0f1np1 00:27:21.185 altname ens817f1np1 00:27:21.185 inet 192.168.100.9/24 scope global mlx_0_1 00:27:21.185 valid_lft forever preferred_lft forever 00:27:21.185 12:54:53 -- nvmf/common.sh@410 -- # return 0 00:27:21.185 12:54:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:21.185 12:54:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:21.185 12:54:53 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@444 -- # get_available_rdma_ips 
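The per-interface address lookups traced here reduce to one pipeline over "ip -o -4 addr show". A minimal sketch of that lookup, reconstructed from the xtrace output above rather than the verbatim nvmf/common.sh source (interface names are simply the ones present on this rig):

    get_ip_address() {
        local interface=$1
        # "192.168.100.8/24" is field 4 of the one-line output; drop the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # 192.168.100.8 in this run
    get_ip_address mlx_0_1   # 192.168.100.9 in this run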
00:27:21.185 12:54:53 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:27:21.185 12:54:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:21.185 12:54:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:21.185 12:54:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:21.185 12:54:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:21.185 12:54:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:21.185 12:54:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:21.185 12:54:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:21.185 12:54:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:21.185 12:54:53 -- nvmf/common.sh@104 -- # continue 2 00:27:21.185 12:54:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:21.185 12:54:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:21.185 12:54:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:21.185 12:54:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:21.185 12:54:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:21.185 12:54:53 -- nvmf/common.sh@104 -- # continue 2 00:27:21.185 12:54:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:21.185 12:54:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:27:21.185 12:54:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:21.185 12:54:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:21.185 12:54:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:21.185 12:54:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:21.185 12:54:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:21.185 12:54:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:27:21.185 12:54:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:21.185 12:54:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:21.185 12:54:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:21.185 12:54:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:21.185 12:54:53 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:27:21.185 192.168.100.9' 00:27:21.185 12:54:53 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:27:21.185 192.168.100.9' 00:27:21.185 12:54:53 -- nvmf/common.sh@445 -- # head -n 1 00:27:21.185 12:54:53 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:21.185 12:54:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:27:21.185 192.168.100.9' 00:27:21.185 12:54:53 -- nvmf/common.sh@446 -- # tail -n +2 00:27:21.185 12:54:53 -- nvmf/common.sh@446 -- # head -n 1 00:27:21.186 12:54:53 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:21.186 12:54:53 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:27:21.186 12:54:53 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:21.186 12:54:53 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:27:21.186 12:54:53 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:27:21.186 12:54:53 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:27:21.186 12:54:53 -- host/fio.sh@16 -- # [[ y != y ]] 00:27:21.186 12:54:53 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:21.186 12:54:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:21.186 12:54:53 -- common/autotest_common.sh@10 -- # set +x 
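The nvmftestinit trace above amounts to loading the RDMA kernel modules, recording the two mlx_0_* port addresses, and fixing the transport options used for the rest of the run. A condensed sketch under those assumptions (module list, addresses, and options copied from the trace; ordering and error handling simplified):

    # Kernel prerequisites for the NVMe/RDMA target and host tests
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
        modprobe "$mod"
    done
    # Options later handed to nvmf_create_transport, plus the RDMA port addresses
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    NVMF_FIRST_TARGET_IP=192.168.100.8
    NVMF_SECOND_TARGET_IP=192.168.100.9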
00:27:21.186 12:54:53 -- host/fio.sh@24 -- # nvmfpid=661795 00:27:21.186 12:54:53 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:21.186 12:54:53 -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:21.186 12:54:53 -- host/fio.sh@28 -- # waitforlisten 661795 00:27:21.186 12:54:53 -- common/autotest_common.sh@829 -- # '[' -z 661795 ']' 00:27:21.186 12:54:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.186 12:54:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:21.186 12:54:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.186 12:54:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:21.186 12:54:53 -- common/autotest_common.sh@10 -- # set +x 00:27:21.186 [2024-11-20 12:54:53.309404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:21.186 [2024-11-20 12:54:53.309458] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.186 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.186 [2024-11-20 12:54:53.371575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:21.186 [2024-11-20 12:54:53.437924] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:21.186 [2024-11-20 12:54:53.438061] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:21.186 [2024-11-20 12:54:53.438072] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.186 [2024-11-20 12:54:53.438081] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:21.186 [2024-11-20 12:54:53.438164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.186 [2024-11-20 12:54:53.438285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:21.186 [2024-11-20 12:54:53.438440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.186 [2024-11-20 12:54:53.438441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:21.186 12:54:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:21.186 12:54:54 -- common/autotest_common.sh@862 -- # return 0 00:27:21.186 12:54:54 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:21.186 [2024-11-20 12:54:54.276564] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd3f7f0/0xd43ce0) succeed. 00:27:21.447 [2024-11-20 12:54:54.289731] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd40de0/0xd85380) succeed. 
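Before any namespaces exist, the fio host test only needs the target application running and an RDMA transport created over its RPC socket. A condensed sketch of that bring-up using the flags from this run (paths shortened to repo-relative form; the polling loop is a simplified stand-in for the waitforlisten helper and assumes the default /var/tmp/spdk.sock):

    # Start the target with the core mask and trace flags used by host/fio.sh
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Wait until the target answers on its RPC socket before configuring it
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192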
00:27:21.447 12:54:54 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:21.447 12:54:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:21.447 12:54:54 -- common/autotest_common.sh@10 -- # set +x 00:27:21.447 12:54:54 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:21.709 Malloc1 00:27:21.709 12:54:54 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:21.969 12:54:54 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:21.969 12:54:55 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:22.229 [2024-11-20 12:54:55.157350] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:22.229 12:54:55 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:22.489 12:54:55 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:27:22.489 12:54:55 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:22.489 12:54:55 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:22.489 12:54:55 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:22.489 12:54:55 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:22.489 12:54:55 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:22.489 12:54:55 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:22.489 12:54:55 -- common/autotest_common.sh@1330 -- # shift 00:27:22.489 12:54:55 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:22.489 12:54:55 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:22.489 12:54:55 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:22.489 12:54:55 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:22.489 12:54:55 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:22.489 12:54:55 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:22.489 12:54:55 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:22.489 12:54:55 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:22.489 12:54:55 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:22.489 12:54:55 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:22.489 12:54:55 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:22.489 12:54:55 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:22.489 12:54:55 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:22.489 12:54:55 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:22.489 12:54:55 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:22.749 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:22.749 fio-3.35 00:27:22.749 Starting 1 thread 00:27:22.749 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.312 00:27:25.312 test: (groupid=0, jobs=1): err= 0: pid=662362: Wed Nov 20 12:54:58 2024 00:27:25.312 read: IOPS=21.1k, BW=82.5MiB/s (86.5MB/s)(165MiB/2003msec) 00:27:25.312 slat (nsec): min=2017, max=32538, avg=2080.95, stdev=494.18 00:27:25.312 clat (usec): min=2246, max=5376, avg=3021.84, stdev=258.16 00:27:25.312 lat (usec): min=2271, max=5378, avg=3023.92, stdev=258.31 00:27:25.312 clat percentiles (usec): 00:27:25.312 | 1.00th=[ 2704], 5.00th=[ 2933], 10.00th=[ 2933], 20.00th=[ 2966], 00:27:25.312 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2966], 00:27:25.312 | 70.00th=[ 2999], 80.00th=[ 2999], 90.00th=[ 2999], 95.00th=[ 3195], 00:27:25.312 | 99.00th=[ 4293], 99.50th=[ 4293], 99.90th=[ 4686], 99.95th=[ 5014], 00:27:25.312 | 99.99th=[ 5342] 00:27:25.312 bw ( KiB/s): min=78656, max=86976, per=100.00%, avg=84472.00, stdev=3899.12, samples=4 00:27:25.312 iops : min=19664, max=21744, avg=21118.00, stdev=974.78, samples=4 00:27:25.312 write: IOPS=21.0k, BW=82.0MiB/s (86.0MB/s)(164MiB/2003msec); 0 zone resets 00:27:25.312 slat (nsec): min=2066, max=14645, avg=2166.48, stdev=488.78 00:27:25.312 clat (usec): min=2650, max=5343, avg=3021.68, stdev=259.90 00:27:25.312 lat (usec): min=2652, max=5345, avg=3023.84, stdev=260.07 00:27:25.312 clat percentiles (usec): 00:27:25.312 | 1.00th=[ 2704], 5.00th=[ 2933], 10.00th=[ 2933], 20.00th=[ 2966], 00:27:25.312 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2966], 00:27:25.312 | 70.00th=[ 2999], 80.00th=[ 2999], 90.00th=[ 2999], 95.00th=[ 3195], 00:27:25.312 | 99.00th=[ 4293], 99.50th=[ 4293], 99.90th=[ 4621], 99.95th=[ 4948], 00:27:25.312 | 99.99th=[ 5276] 00:27:25.312 bw ( KiB/s): min=78488, max=86344, per=100.00%, avg=83964.00, stdev=3722.66, samples=4 00:27:25.312 iops : min=19622, max=21586, avg=20991.00, stdev=930.66, samples=4 00:27:25.312 lat (msec) : 4=96.36%, 10=3.64% 00:27:25.312 cpu : usr=99.65%, sys=0.00%, ctx=15, majf=0, minf=2 00:27:25.312 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:25.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:25.312 issued rwts: total=42301,42044,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.312 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:25.312 00:27:25.312 Run status group 0 (all jobs): 00:27:25.312 READ: bw=82.5MiB/s (86.5MB/s), 82.5MiB/s-82.5MiB/s (86.5MB/s-86.5MB/s), io=165MiB (173MB), run=2003-2003msec 00:27:25.312 WRITE: bw=82.0MiB/s (86.0MB/s), 82.0MiB/s-82.0MiB/s (86.0MB/s-86.0MB/s), io=164MiB (172MB), run=2003-2003msec 00:27:25.312 12:54:58 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:25.312 12:54:58 -- common/autotest_common.sh@1349 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:25.312 12:54:58 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:25.312 12:54:58 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:25.312 12:54:58 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:25.312 12:54:58 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:25.312 12:54:58 -- common/autotest_common.sh@1330 -- # shift 00:27:25.312 12:54:58 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:25.312 12:54:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:25.312 12:54:58 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:25.312 12:54:58 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:25.312 12:54:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:25.312 12:54:58 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:25.312 12:54:58 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:25.312 12:54:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:25.312 12:54:58 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:25.312 12:54:58 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:25.312 12:54:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:25.312 12:54:58 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:25.312 12:54:58 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:25.312 12:54:58 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:25.312 12:54:58 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:25.573 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:25.573 fio-3.35 00:27:25.573 Starting 1 thread 00:27:25.573 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.115 00:27:28.115 test: (groupid=0, jobs=1): err= 0: pid=663170: Wed Nov 20 12:55:00 2024 00:27:28.115 read: IOPS=14.0k, BW=219MiB/s (229MB/s)(428MiB/1960msec) 00:27:28.115 slat (nsec): min=3352, max=55258, avg=3625.07, stdev=1198.48 00:27:28.115 clat (usec): min=308, max=10445, avg=3519.95, stdev=1956.00 00:27:28.115 lat (usec): min=312, max=10468, avg=3523.57, stdev=1956.17 00:27:28.115 clat percentiles (usec): 00:27:28.115 | 1.00th=[ 881], 5.00th=[ 1090], 10.00th=[ 1237], 20.00th=[ 1598], 00:27:28.115 | 30.00th=[ 1991], 40.00th=[ 2442], 50.00th=[ 3228], 60.00th=[ 3916], 00:27:28.115 | 70.00th=[ 4621], 80.00th=[ 5407], 90.00th=[ 6259], 95.00th=[ 7046], 00:27:28.115 | 99.00th=[ 8291], 99.50th=[ 8586], 99.90th=[ 9372], 99.95th=[ 9765], 00:27:28.115 | 99.99th=[10421] 00:27:28.115 bw ( KiB/s): min=103072, max=112352, per=48.13%, avg=107704.00, stdev=4308.12, samples=4 00:27:28.116 iops : min= 6442, max= 7022, avg=6731.50, stdev=269.26, samples=4 00:27:28.116 write: IOPS=7729, BW=121MiB/s (127MB/s)(219MiB/1810msec); 0 zone resets 00:27:28.116 slat (usec): min=39, max=146, 
avg=40.85, stdev= 6.86 00:27:28.116 clat (usec): min=321, max=24178, avg=9409.91, stdev=5195.06 00:27:28.116 lat (usec): min=361, max=24218, avg=9450.76, stdev=5195.26 00:27:28.116 clat percentiles (usec): 00:27:28.116 | 1.00th=[ 2114], 5.00th=[ 2737], 10.00th=[ 3261], 20.00th=[ 4113], 00:27:28.116 | 30.00th=[ 5211], 40.00th=[ 6390], 50.00th=[ 7963], 60.00th=[11731], 00:27:28.116 | 70.00th=[13829], 80.00th=[15008], 90.00th=[16319], 95.00th=[17433], 00:27:28.116 | 99.00th=[19268], 99.50th=[20055], 99.90th=[21365], 99.95th=[23462], 00:27:28.116 | 99.99th=[24249] 00:27:28.116 bw ( KiB/s): min=109024, max=114176, per=90.23%, avg=111600.00, stdev=2231.22, samples=4 00:27:28.116 iops : min= 6814, max= 7136, avg=6975.00, stdev=139.45, samples=4 00:27:28.116 lat (usec) : 500=0.07%, 750=0.17%, 1000=1.60% 00:27:28.116 lat (msec) : 2=18.37%, 4=26.55%, 10=38.60%, 20=14.46%, 50=0.18% 00:27:28.116 cpu : usr=96.65%, sys=1.05%, ctx=183, majf=0, minf=2 00:27:28.116 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:27:28.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:28.116 issued rwts: total=27411,13991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:28.116 00:27:28.116 Run status group 0 (all jobs): 00:27:28.116 READ: bw=219MiB/s (229MB/s), 219MiB/s-219MiB/s (229MB/s-229MB/s), io=428MiB (449MB), run=1960-1960msec 00:27:28.116 WRITE: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=219MiB (229MB), run=1810-1810msec 00:27:28.116 12:55:00 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:28.116 12:55:01 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:27:28.116 12:55:01 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:27:28.116 12:55:01 -- host/fio.sh@51 -- # get_nvme_bdfs 00:27:28.116 12:55:01 -- common/autotest_common.sh@1508 -- # bdfs=() 00:27:28.116 12:55:01 -- common/autotest_common.sh@1508 -- # local bdfs 00:27:28.116 12:55:01 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:28.116 12:55:01 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:28.116 12:55:01 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:27:28.116 12:55:01 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:27:28.116 12:55:01 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:65:00.0 00:27:28.116 12:55:01 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 192.168.100.8 00:27:28.689 Nvme0n1 00:27:28.689 12:55:01 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:29.260 12:55:02 -- host/fio.sh@53 -- # ls_guid=324bf0c2-c4a4-46ae-8782-83da7bcc9e64 00:27:29.260 12:55:02 -- host/fio.sh@54 -- # get_lvs_free_mb 324bf0c2-c4a4-46ae-8782-83da7bcc9e64 00:27:29.260 12:55:02 -- common/autotest_common.sh@1353 -- # local lvs_uuid=324bf0c2-c4a4-46ae-8782-83da7bcc9e64 00:27:29.260 12:55:02 -- common/autotest_common.sh@1354 -- # local lvs_info 00:27:29.260 12:55:02 -- common/autotest_common.sh@1355 -- # local fc 00:27:29.260 12:55:02 -- common/autotest_common.sh@1356 -- # local cs 00:27:29.260 12:55:02 
-- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:29.260 12:55:02 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:27:29.260 { 00:27:29.260 "uuid": "324bf0c2-c4a4-46ae-8782-83da7bcc9e64", 00:27:29.260 "name": "lvs_0", 00:27:29.260 "base_bdev": "Nvme0n1", 00:27:29.260 "total_data_clusters": 1787, 00:27:29.260 "free_clusters": 1787, 00:27:29.260 "block_size": 512, 00:27:29.260 "cluster_size": 1073741824 00:27:29.260 } 00:27:29.260 ]' 00:27:29.260 12:55:02 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="324bf0c2-c4a4-46ae-8782-83da7bcc9e64") .free_clusters' 00:27:29.520 12:55:02 -- common/autotest_common.sh@1358 -- # fc=1787 00:27:29.520 12:55:02 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="324bf0c2-c4a4-46ae-8782-83da7bcc9e64") .cluster_size' 00:27:29.520 12:55:02 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:27:29.520 12:55:02 -- common/autotest_common.sh@1362 -- # free_mb=1829888 00:27:29.520 12:55:02 -- common/autotest_common.sh@1363 -- # echo 1829888 00:27:29.520 1829888 00:27:29.520 12:55:02 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:27:29.520 72ae4b09-dd3c-4c40-bd20-6dac26250ba9 00:27:29.520 12:55:02 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:29.781 12:55:02 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:30.041 12:55:02 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:27:30.041 12:55:03 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:30.041 12:55:03 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:30.041 12:55:03 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:30.041 12:55:03 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:30.041 12:55:03 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:30.041 12:55:03 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:30.041 12:55:03 -- common/autotest_common.sh@1330 -- # shift 00:27:30.041 12:55:03 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:30.041 12:55:03 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:30.041 12:55:03 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:30.041 12:55:03 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:30.041 12:55:03 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:30.326 12:55:03 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:30.326 12:55:03 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:30.326 12:55:03 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 
00:27:30.326 12:55:03 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:30.326 12:55:03 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:30.326 12:55:03 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:30.326 12:55:03 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:30.326 12:55:03 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:30.326 12:55:03 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:30.326 12:55:03 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:30.590 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:30.590 fio-3.35 00:27:30.590 Starting 1 thread 00:27:30.590 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.131 00:27:33.131 test: (groupid=0, jobs=1): err= 0: pid=664374: Wed Nov 20 12:55:05 2024 00:27:33.131 read: IOPS=14.3k, BW=55.7MiB/s (58.5MB/s)(112MiB/2004msec) 00:27:33.131 slat (nsec): min=2023, max=20985, avg=2143.17, stdev=217.90 00:27:33.131 clat (usec): min=2102, max=7369, avg=4437.59, stdev=107.22 00:27:33.131 lat (usec): min=2112, max=7371, avg=4439.73, stdev=107.20 00:27:33.131 clat percentiles (usec): 00:27:33.131 | 1.00th=[ 4424], 5.00th=[ 4424], 10.00th=[ 4424], 20.00th=[ 4424], 00:27:33.131 | 30.00th=[ 4424], 40.00th=[ 4424], 50.00th=[ 4424], 60.00th=[ 4424], 00:27:33.131 | 70.00th=[ 4424], 80.00th=[ 4424], 90.00th=[ 4490], 95.00th=[ 4490], 00:27:33.131 | 99.00th=[ 4555], 99.50th=[ 4621], 99.90th=[ 6063], 99.95th=[ 7177], 00:27:33.131 | 99.99th=[ 7308] 00:27:33.131 bw ( KiB/s): min=55072, max=57968, per=99.97%, avg=57066.00, stdev=1352.76, samples=4 00:27:33.131 iops : min=13768, max=14492, avg=14266.50, stdev=338.19, samples=4 00:27:33.131 write: IOPS=14.3k, BW=55.9MiB/s (58.6MB/s)(112MiB/2004msec); 0 zone resets 00:27:33.131 slat (nsec): min=2070, max=8045, avg=2229.92, stdev=192.41 00:27:33.131 clat (usec): min=2106, max=7361, avg=4423.09, stdev=95.20 00:27:33.131 lat (usec): min=2111, max=7363, avg=4425.32, stdev=95.18 00:27:33.131 clat percentiles (usec): 00:27:33.131 | 1.00th=[ 4359], 5.00th=[ 4424], 10.00th=[ 4424], 20.00th=[ 4424], 00:27:33.131 | 30.00th=[ 4424], 40.00th=[ 4424], 50.00th=[ 4424], 60.00th=[ 4424], 00:27:33.131 | 70.00th=[ 4424], 80.00th=[ 4424], 90.00th=[ 4424], 95.00th=[ 4424], 00:27:33.131 | 99.00th=[ 4490], 99.50th=[ 4490], 99.90th=[ 6063], 99.95th=[ 6259], 00:27:33.131 | 99.99th=[ 7308] 00:27:33.131 bw ( KiB/s): min=55464, max=58080, per=99.98%, avg=57186.00, stdev=1170.16, samples=4 00:27:33.131 iops : min=13866, max=14520, avg=14296.50, stdev=292.54, samples=4 00:27:33.131 lat (msec) : 4=0.34%, 10=99.66% 00:27:33.131 cpu : usr=99.60%, sys=0.05%, ctx=16, majf=0, minf=16 00:27:33.131 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:33.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:33.131 issued rwts: total=28598,28655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.131 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:33.131 00:27:33.131 Run status group 0 (all jobs): 00:27:33.131 READ: bw=55.7MiB/s (58.5MB/s), 55.7MiB/s-55.7MiB/s 
(58.5MB/s-58.5MB/s), io=112MiB (117MB), run=2004-2004msec 00:27:33.131 WRITE: bw=55.9MiB/s (58.6MB/s), 55.9MiB/s-55.9MiB/s (58.6MB/s-58.6MB/s), io=112MiB (117MB), run=2004-2004msec 00:27:33.131 12:55:05 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:33.131 12:55:06 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:34.069 12:55:07 -- host/fio.sh@64 -- # ls_nested_guid=52d10baa-f218-4174-9b52-279a4d9df113 00:27:34.069 12:55:07 -- host/fio.sh@65 -- # get_lvs_free_mb 52d10baa-f218-4174-9b52-279a4d9df113 00:27:34.069 12:55:07 -- common/autotest_common.sh@1353 -- # local lvs_uuid=52d10baa-f218-4174-9b52-279a4d9df113 00:27:34.069 12:55:07 -- common/autotest_common.sh@1354 -- # local lvs_info 00:27:34.069 12:55:07 -- common/autotest_common.sh@1355 -- # local fc 00:27:34.069 12:55:07 -- common/autotest_common.sh@1356 -- # local cs 00:27:34.069 12:55:07 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:34.328 12:55:07 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:27:34.328 { 00:27:34.328 "uuid": "324bf0c2-c4a4-46ae-8782-83da7bcc9e64", 00:27:34.328 "name": "lvs_0", 00:27:34.328 "base_bdev": "Nvme0n1", 00:27:34.328 "total_data_clusters": 1787, 00:27:34.328 "free_clusters": 0, 00:27:34.328 "block_size": 512, 00:27:34.328 "cluster_size": 1073741824 00:27:34.328 }, 00:27:34.328 { 00:27:34.328 "uuid": "52d10baa-f218-4174-9b52-279a4d9df113", 00:27:34.328 "name": "lvs_n_0", 00:27:34.328 "base_bdev": "72ae4b09-dd3c-4c40-bd20-6dac26250ba9", 00:27:34.328 "total_data_clusters": 457025, 00:27:34.328 "free_clusters": 457025, 00:27:34.329 "block_size": 512, 00:27:34.329 "cluster_size": 4194304 00:27:34.329 } 00:27:34.329 ]' 00:27:34.329 12:55:07 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="52d10baa-f218-4174-9b52-279a4d9df113") .free_clusters' 00:27:34.329 12:55:07 -- common/autotest_common.sh@1358 -- # fc=457025 00:27:34.329 12:55:07 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="52d10baa-f218-4174-9b52-279a4d9df113") .cluster_size' 00:27:34.329 12:55:07 -- common/autotest_common.sh@1359 -- # cs=4194304 00:27:34.329 12:55:07 -- common/autotest_common.sh@1362 -- # free_mb=1828100 00:27:34.329 12:55:07 -- common/autotest_common.sh@1363 -- # echo 1828100 00:27:34.329 1828100 00:27:34.329 12:55:07 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:27:35.268 002b4242-7276-46fb-9ae1-9d5f3b33be55 00:27:35.268 12:55:08 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:35.529 12:55:08 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:35.789 12:55:08 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:27:35.789 12:55:08 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:35.789 12:55:08 -- common/autotest_common.sh@1349 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:35.789 12:55:08 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:35.789 12:55:08 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:35.789 12:55:08 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:35.789 12:55:08 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:35.789 12:55:08 -- common/autotest_common.sh@1330 -- # shift 00:27:35.789 12:55:08 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:35.789 12:55:08 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:35.789 12:55:08 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:35.789 12:55:08 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:35.789 12:55:08 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:35.790 12:55:08 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:35.790 12:55:08 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:35.790 12:55:08 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:35.790 12:55:08 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:35.790 12:55:08 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:35.790 12:55:08 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:35.790 12:55:08 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:35.790 12:55:08 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:35.790 12:55:08 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:35.790 12:55:08 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:36.361 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:36.361 fio-3.35 00:27:36.361 Starting 1 thread 00:27:36.361 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.908 00:27:38.908 test: (groupid=0, jobs=1): err= 0: pid=665569: Wed Nov 20 12:55:11 2024 00:27:38.908 read: IOPS=8221, BW=32.1MiB/s (33.7MB/s)(64.4MiB/2006msec) 00:27:38.908 slat (nsec): min=2028, max=20986, avg=2138.48, stdev=244.79 00:27:38.908 clat (usec): min=4751, max=12694, avg=7717.80, stdev=239.41 00:27:38.908 lat (usec): min=4760, max=12697, avg=7719.94, stdev=239.38 00:27:38.908 clat percentiles (usec): 00:27:38.908 | 1.00th=[ 7635], 5.00th=[ 7701], 10.00th=[ 7701], 20.00th=[ 7701], 00:27:38.908 | 30.00th=[ 7701], 40.00th=[ 7701], 50.00th=[ 7701], 60.00th=[ 7701], 00:27:38.908 | 70.00th=[ 7701], 80.00th=[ 7767], 90.00th=[ 7767], 95.00th=[ 7767], 00:27:38.908 | 99.00th=[ 7963], 99.50th=[ 8225], 99.90th=[12387], 99.95th=[12649], 00:27:38.908 | 99.99th=[12649] 00:27:38.908 bw ( KiB/s): min=31496, max=33616, per=99.89%, avg=32850.00, stdev=940.73, samples=4 00:27:38.908 iops : min= 7874, max= 8404, avg=8212.50, stdev=235.18, samples=4 00:27:38.908 write: IOPS=8224, BW=32.1MiB/s (33.7MB/s)(64.4MiB/2006msec); 0 zone resets 00:27:38.908 slat (nsec): min=2080, 
max=13672, avg=2247.42, stdev=225.76 00:27:38.908 clat (usec): min=4734, max=12686, avg=7699.75, stdev=228.70 00:27:38.908 lat (usec): min=4737, max=12688, avg=7702.00, stdev=228.66 00:27:38.908 clat percentiles (usec): 00:27:38.908 | 1.00th=[ 7635], 5.00th=[ 7635], 10.00th=[ 7635], 20.00th=[ 7701], 00:27:38.908 | 30.00th=[ 7701], 40.00th=[ 7701], 50.00th=[ 7701], 60.00th=[ 7701], 00:27:38.908 | 70.00th=[ 7701], 80.00th=[ 7701], 90.00th=[ 7701], 95.00th=[ 7767], 00:27:38.908 | 99.00th=[ 7963], 99.50th=[ 8586], 99.90th=[10683], 99.95th=[12518], 00:27:38.908 | 99.99th=[12649] 00:27:38.908 bw ( KiB/s): min=32256, max=33344, per=99.94%, avg=32880.00, stdev=453.87, samples=4 00:27:38.908 iops : min= 8064, max= 8336, avg=8220.00, stdev=113.47, samples=4 00:27:38.908 lat (msec) : 10=99.81%, 20=0.19% 00:27:38.908 cpu : usr=99.50%, sys=0.15%, ctx=15, majf=0, minf=16 00:27:38.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:38.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:38.908 issued rwts: total=16493,16499,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:38.908 00:27:38.908 Run status group 0 (all jobs): 00:27:38.908 READ: bw=32.1MiB/s (33.7MB/s), 32.1MiB/s-32.1MiB/s (33.7MB/s-33.7MB/s), io=64.4MiB (67.6MB), run=2006-2006msec 00:27:38.908 WRITE: bw=32.1MiB/s (33.7MB/s), 32.1MiB/s-32.1MiB/s (33.7MB/s-33.7MB/s), io=64.4MiB (67.6MB), run=2006-2006msec 00:27:38.908 12:55:11 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:38.908 12:55:11 -- host/fio.sh@74 -- # sync 00:27:38.908 12:55:11 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:40.814 12:55:13 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:41.073 12:55:14 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:41.643 12:55:14 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:41.904 12:55:14 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:43.815 12:55:16 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:43.815 12:55:16 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:43.815 12:55:16 -- host/fio.sh@86 -- # nvmftestfini 00:27:43.815 12:55:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:43.815 12:55:16 -- nvmf/common.sh@116 -- # sync 00:27:43.815 12:55:16 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:43.815 12:55:16 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:43.815 12:55:16 -- nvmf/common.sh@119 -- # set +e 00:27:43.815 12:55:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:43.815 12:55:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:43.815 rmmod nvme_rdma 00:27:43.815 rmmod nvme_fabrics 00:27:43.815 12:55:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:43.815 12:55:16 -- nvmf/common.sh@123 -- # set -e 00:27:43.815 12:55:16 -- nvmf/common.sh@124 -- # return 0 00:27:43.815 12:55:16 -- nvmf/common.sh@477 -- # '[' -n 661795 ']' 00:27:43.815 12:55:16 -- nvmf/common.sh@478 -- # killprocess 661795 00:27:43.815 
12:55:16 -- common/autotest_common.sh@936 -- # '[' -z 661795 ']' 00:27:43.815 12:55:16 -- common/autotest_common.sh@940 -- # kill -0 661795 00:27:43.815 12:55:16 -- common/autotest_common.sh@941 -- # uname 00:27:43.815 12:55:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:43.815 12:55:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 661795 00:27:43.815 12:55:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:43.815 12:55:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:43.815 12:55:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 661795' 00:27:43.815 killing process with pid 661795 00:27:43.815 12:55:16 -- common/autotest_common.sh@955 -- # kill 661795 00:27:43.815 12:55:16 -- common/autotest_common.sh@960 -- # wait 661795 00:27:44.075 12:55:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:44.075 12:55:17 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:44.075 00:27:44.075 real 0m31.286s 00:27:44.075 user 2m46.309s 00:27:44.075 sys 0m7.539s 00:27:44.075 12:55:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:44.075 12:55:17 -- common/autotest_common.sh@10 -- # set +x 00:27:44.075 ************************************ 00:27:44.075 END TEST nvmf_fio_host 00:27:44.075 ************************************ 00:27:44.075 12:55:17 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:44.075 12:55:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:44.076 12:55:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:44.076 12:55:17 -- common/autotest_common.sh@10 -- # set +x 00:27:44.076 ************************************ 00:27:44.076 START TEST nvmf_failover 00:27:44.076 ************************************ 00:27:44.076 12:55:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:44.337 * Looking for test storage... 00:27:44.337 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:44.337 12:55:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:44.337 12:55:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:44.337 12:55:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:44.337 12:55:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:44.337 12:55:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:44.337 12:55:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:44.337 12:55:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:44.337 12:55:17 -- scripts/common.sh@335 -- # IFS=.-: 00:27:44.337 12:55:17 -- scripts/common.sh@335 -- # read -ra ver1 00:27:44.337 12:55:17 -- scripts/common.sh@336 -- # IFS=.-: 00:27:44.337 12:55:17 -- scripts/common.sh@336 -- # read -ra ver2 00:27:44.337 12:55:17 -- scripts/common.sh@337 -- # local 'op=<' 00:27:44.337 12:55:17 -- scripts/common.sh@339 -- # ver1_l=2 00:27:44.337 12:55:17 -- scripts/common.sh@340 -- # ver2_l=1 00:27:44.337 12:55:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:44.337 12:55:17 -- scripts/common.sh@343 -- # case "$op" in 00:27:44.337 12:55:17 -- scripts/common.sh@344 -- # : 1 00:27:44.337 12:55:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:44.337 12:55:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:44.337 12:55:17 -- scripts/common.sh@364 -- # decimal 1 00:27:44.337 12:55:17 -- scripts/common.sh@352 -- # local d=1 00:27:44.337 12:55:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:44.337 12:55:17 -- scripts/common.sh@354 -- # echo 1 00:27:44.337 12:55:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:44.337 12:55:17 -- scripts/common.sh@365 -- # decimal 2 00:27:44.337 12:55:17 -- scripts/common.sh@352 -- # local d=2 00:27:44.337 12:55:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:44.337 12:55:17 -- scripts/common.sh@354 -- # echo 2 00:27:44.337 12:55:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:44.337 12:55:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:44.337 12:55:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:44.337 12:55:17 -- scripts/common.sh@367 -- # return 0 00:27:44.337 12:55:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:44.337 12:55:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:44.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.337 --rc genhtml_branch_coverage=1 00:27:44.337 --rc genhtml_function_coverage=1 00:27:44.337 --rc genhtml_legend=1 00:27:44.337 --rc geninfo_all_blocks=1 00:27:44.337 --rc geninfo_unexecuted_blocks=1 00:27:44.337 00:27:44.337 ' 00:27:44.337 12:55:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:44.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.337 --rc genhtml_branch_coverage=1 00:27:44.337 --rc genhtml_function_coverage=1 00:27:44.337 --rc genhtml_legend=1 00:27:44.337 --rc geninfo_all_blocks=1 00:27:44.337 --rc geninfo_unexecuted_blocks=1 00:27:44.337 00:27:44.337 ' 00:27:44.337 12:55:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:44.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.337 --rc genhtml_branch_coverage=1 00:27:44.337 --rc genhtml_function_coverage=1 00:27:44.337 --rc genhtml_legend=1 00:27:44.337 --rc geninfo_all_blocks=1 00:27:44.337 --rc geninfo_unexecuted_blocks=1 00:27:44.337 00:27:44.337 ' 00:27:44.337 12:55:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:44.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.337 --rc genhtml_branch_coverage=1 00:27:44.337 --rc genhtml_function_coverage=1 00:27:44.337 --rc genhtml_legend=1 00:27:44.337 --rc geninfo_all_blocks=1 00:27:44.337 --rc geninfo_unexecuted_blocks=1 00:27:44.337 00:27:44.337 ' 00:27:44.337 12:55:17 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.337 12:55:17 -- nvmf/common.sh@7 -- # uname -s 00:27:44.337 12:55:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.337 12:55:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.337 12:55:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.337 12:55:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.337 12:55:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.337 12:55:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.337 12:55:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.337 12:55:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.337 12:55:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.337 12:55:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.337 12:55:17 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:44.337 12:55:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:44.337 12:55:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.337 12:55:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.337 12:55:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.337 12:55:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:44.337 12:55:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.337 12:55:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.337 12:55:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.337 12:55:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.337 12:55:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.337 12:55:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.337 12:55:17 -- paths/export.sh@5 -- # export PATH 00:27:44.337 12:55:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.337 12:55:17 -- nvmf/common.sh@46 -- # : 0 00:27:44.337 12:55:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:44.337 12:55:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:44.337 12:55:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:44.338 12:55:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.338 12:55:17 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.338 12:55:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:44.338 12:55:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:44.338 12:55:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:44.338 12:55:17 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:44.338 12:55:17 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:44.338 12:55:17 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:44.338 12:55:17 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:44.338 12:55:17 -- host/failover.sh@18 -- # nvmftestinit 00:27:44.338 12:55:17 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:44.338 12:55:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.338 12:55:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:44.338 12:55:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:44.338 12:55:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:44.338 12:55:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.338 12:55:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.338 12:55:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.338 12:55:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:44.338 12:55:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:44.338 12:55:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:44.338 12:55:17 -- common/autotest_common.sh@10 -- # set +x 00:27:52.477 12:55:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:52.477 12:55:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:52.477 12:55:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:52.477 12:55:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:52.477 12:55:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:52.477 12:55:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:52.477 12:55:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:52.477 12:55:24 -- nvmf/common.sh@294 -- # net_devs=() 00:27:52.477 12:55:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:52.477 12:55:24 -- nvmf/common.sh@295 -- # e810=() 00:27:52.477 12:55:24 -- nvmf/common.sh@295 -- # local -ga e810 00:27:52.477 12:55:24 -- nvmf/common.sh@296 -- # x722=() 00:27:52.477 12:55:24 -- nvmf/common.sh@296 -- # local -ga x722 00:27:52.477 12:55:24 -- nvmf/common.sh@297 -- # mlx=() 00:27:52.477 12:55:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:52.477 12:55:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.477 12:55:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.477 12:55:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.477 12:55:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.477 12:55:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.477 12:55:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.477 12:55:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.477 12:55:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.477 12:55:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.477 12:55:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.477 12:55:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.477 12:55:24 
-- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:52.477 12:55:24 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:27:52.477 12:55:24 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:27:52.478 12:55:24 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:27:52.478 12:55:24 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:27:52.478 12:55:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:52.478 12:55:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:52.478 12:55:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:27:52.478 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:27:52.478 12:55:24 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:52.478 12:55:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:52.478 12:55:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:27:52.478 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:27:52.478 12:55:24 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:52.478 12:55:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:52.478 12:55:24 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:52.478 12:55:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.478 12:55:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:52.478 12:55:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.478 12:55:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:27:52.478 Found net devices under 0000:98:00.0: mlx_0_0 00:27:52.478 12:55:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.478 12:55:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:52.478 12:55:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.478 12:55:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:52.478 12:55:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.478 12:55:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:27:52.478 Found net devices under 0000:98:00.1: mlx_0_1 00:27:52.478 12:55:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.478 12:55:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:52.478 12:55:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:52.478 12:55:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@408 -- # rdma_device_init 00:27:52.478 12:55:24 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 
00:27:52.478 12:55:24 -- nvmf/common.sh@57 -- # uname 00:27:52.478 12:55:24 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:27:52.478 12:55:24 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:27:52.478 12:55:24 -- nvmf/common.sh@62 -- # modprobe ib_core 00:27:52.478 12:55:24 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:27:52.478 12:55:24 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:27:52.478 12:55:24 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:27:52.478 12:55:24 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:27:52.478 12:55:24 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:27:52.478 12:55:24 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:27:52.478 12:55:24 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:52.478 12:55:24 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:27:52.478 12:55:24 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:52.478 12:55:24 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:52.478 12:55:24 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:52.478 12:55:24 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:52.478 12:55:24 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:52.478 12:55:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:52.478 12:55:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:52.478 12:55:24 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:52.478 12:55:24 -- nvmf/common.sh@104 -- # continue 2 00:27:52.478 12:55:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:52.478 12:55:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:52.478 12:55:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:52.478 12:55:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:52.478 12:55:24 -- nvmf/common.sh@104 -- # continue 2 00:27:52.478 12:55:24 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:52.478 12:55:24 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:27:52.478 12:55:24 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:52.478 12:55:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:52.478 12:55:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:52.478 12:55:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:52.478 12:55:24 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:27:52.478 12:55:24 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:27:52.478 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:52.478 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:27:52.478 altname enp152s0f0np0 00:27:52.478 altname ens817f0np0 00:27:52.478 inet 192.168.100.8/24 scope global mlx_0_0 00:27:52.478 valid_lft forever preferred_lft forever 00:27:52.478 12:55:24 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:52.478 12:55:24 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:27:52.478 12:55:24 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:52.478 12:55:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:52.478 12:55:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:52.478 12:55:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:52.478 12:55:24 -- nvmf/common.sh@73 -- # 
ip=192.168.100.9 00:27:52.478 12:55:24 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:27:52.478 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:52.478 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:27:52.478 altname enp152s0f1np1 00:27:52.478 altname ens817f1np1 00:27:52.478 inet 192.168.100.9/24 scope global mlx_0_1 00:27:52.478 valid_lft forever preferred_lft forever 00:27:52.478 12:55:24 -- nvmf/common.sh@410 -- # return 0 00:27:52.478 12:55:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:52.478 12:55:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:52.478 12:55:24 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:27:52.478 12:55:24 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:27:52.478 12:55:24 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:52.478 12:55:24 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:52.478 12:55:24 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:52.478 12:55:24 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:52.478 12:55:24 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:52.478 12:55:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:52.478 12:55:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:52.478 12:55:24 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:52.478 12:55:24 -- nvmf/common.sh@104 -- # continue 2 00:27:52.478 12:55:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:52.478 12:55:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:52.478 12:55:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:52.478 12:55:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:52.478 12:55:24 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:52.478 12:55:24 -- nvmf/common.sh@104 -- # continue 2 00:27:52.478 12:55:24 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:52.478 12:55:24 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:27:52.478 12:55:24 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:52.478 12:55:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:52.478 12:55:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:52.478 12:55:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:52.478 12:55:24 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:52.478 12:55:24 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:27:52.478 12:55:24 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:52.478 12:55:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:52.478 12:55:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:52.478 12:55:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:52.478 12:55:24 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:27:52.478 192.168.100.9' 00:27:52.478 12:55:24 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:27:52.478 192.168.100.9' 00:27:52.478 12:55:24 -- nvmf/common.sh@445 -- # head -n 1 00:27:52.478 12:55:24 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:52.478 12:55:24 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:27:52.478 192.168.100.9' 00:27:52.478 12:55:24 -- nvmf/common.sh@446 -- 
# tail -n +2 00:27:52.478 12:55:24 -- nvmf/common.sh@446 -- # head -n 1 00:27:52.478 12:55:24 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:52.479 12:55:24 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:27:52.479 12:55:24 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:52.479 12:55:24 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:27:52.479 12:55:24 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:27:52.479 12:55:24 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:27:52.479 12:55:24 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:52.479 12:55:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:52.479 12:55:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:52.479 12:55:24 -- common/autotest_common.sh@10 -- # set +x 00:27:52.479 12:55:24 -- nvmf/common.sh@469 -- # nvmfpid=670940 00:27:52.479 12:55:24 -- nvmf/common.sh@470 -- # waitforlisten 670940 00:27:52.479 12:55:24 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:52.479 12:55:24 -- common/autotest_common.sh@829 -- # '[' -z 670940 ']' 00:27:52.479 12:55:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.479 12:55:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:52.479 12:55:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.479 12:55:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:52.479 12:55:24 -- common/autotest_common.sh@10 -- # set +x 00:27:52.479 [2024-11-20 12:55:24.648587] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:52.479 [2024-11-20 12:55:24.648660] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.479 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.479 [2024-11-20 12:55:24.707881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:52.479 [2024-11-20 12:55:24.771979] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:52.479 [2024-11-20 12:55:24.772086] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.479 [2024-11-20 12:55:24.772092] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.479 [2024-11-20 12:55:24.772098] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:52.479 [2024-11-20 12:55:24.772196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.479 [2024-11-20 12:55:24.772461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.479 [2024-11-20 12:55:24.772462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.479 12:55:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:52.479 12:55:25 -- common/autotest_common.sh@862 -- # return 0 00:27:52.479 12:55:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:52.479 12:55:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:52.479 12:55:25 -- common/autotest_common.sh@10 -- # set +x 00:27:52.479 12:55:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.479 12:55:25 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:52.739 [2024-11-20 12:55:25.690498] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a39fa0/0x1a3e490) succeed. 00:27:52.739 [2024-11-20 12:55:25.701215] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a3b4f0/0x1a7fb30) succeed. 00:27:52.739 12:55:25 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:53.000 Malloc0 00:27:53.000 12:55:26 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:53.260 12:55:26 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:53.260 12:55:26 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:53.521 [2024-11-20 12:55:26.482489] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:53.521 12:55:26 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:53.781 [2024-11-20 12:55:26.650695] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:27:53.781 12:55:26 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:53.781 [2024-11-20 12:55:26.815223] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:27:53.781 12:55:26 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:53.781 12:55:26 -- host/failover.sh@31 -- # bdevperf_pid=671326 00:27:53.781 12:55:26 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:53.781 12:55:26 -- host/failover.sh@34 -- # waitforlisten 671326 /var/tmp/bdevperf.sock 00:27:53.781 12:55:26 -- common/autotest_common.sh@829 -- # '[' -z 671326 ']' 00:27:53.781 12:55:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:53.781 12:55:26 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:27:53.781 12:55:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:53.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:53.781 12:55:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:53.781 12:55:26 -- common/autotest_common.sh@10 -- # set +x 00:27:54.722 12:55:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:54.722 12:55:27 -- common/autotest_common.sh@862 -- # return 0 00:27:54.722 12:55:27 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:54.982 NVMe0n1 00:27:54.982 12:55:27 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:55.242 00:27:55.242 12:55:28 -- host/failover.sh@39 -- # run_test_pid=671660 00:27:55.242 12:55:28 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:55.242 12:55:28 -- host/failover.sh@41 -- # sleep 1 00:27:56.181 12:55:29 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:56.441 12:55:29 -- host/failover.sh@45 -- # sleep 3 00:27:59.741 12:55:32 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:59.741 00:27:59.741 12:55:32 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:59.741 12:55:32 -- host/failover.sh@50 -- # sleep 3 00:28:03.041 12:55:35 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:03.041 [2024-11-20 12:55:35.923778] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:03.041 12:55:35 -- host/failover.sh@55 -- # sleep 1 00:28:03.981 12:55:36 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:28:04.242 12:55:37 -- host/failover.sh@59 -- # wait 671660 00:28:10.828 0 00:28:10.828 12:55:43 -- host/failover.sh@61 -- # killprocess 671326 00:28:10.828 12:55:43 -- common/autotest_common.sh@936 -- # '[' -z 671326 ']' 00:28:10.828 12:55:43 -- common/autotest_common.sh@940 -- # kill -0 671326 00:28:10.829 12:55:43 -- common/autotest_common.sh@941 -- # uname 00:28:10.829 12:55:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:10.829 12:55:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 671326 00:28:10.829 12:55:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:10.829 12:55:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:10.829 12:55:43 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 671326' 00:28:10.829 killing process with pid 671326 00:28:10.829 12:55:43 -- common/autotest_common.sh@955 -- # kill 671326 00:28:10.829 12:55:43 -- common/autotest_common.sh@960 -- # wait 671326 00:28:10.829 12:55:43 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:10.829 [2024-11-20 12:55:26.879163] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:10.829 [2024-11-20 12:55:26.879220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671326 ] 00:28:10.829 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.829 [2024-11-20 12:55:26.940213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.829 [2024-11-20 12:55:27.002288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.829 Running I/O for 15 seconds... 00:28:10.829 [2024-11-20 12:55:30.327997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.829 [2024-11-20 12:55:30.328043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.829 [2024-11-20 12:55:30.328072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183000 00:28:10.829 [2024-11-20 12:55:30.328091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.829 [2024-11-20 12:55:30.328109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x182800 00:28:10.829 [2024-11-20 12:55:30.328126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a8200 len:0x1000 key:0x182800 00:28:10.829 [2024-11-20 12:55:30.328143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.829 [2024-11-20 
12:55:30.328160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a6100 len:0x1000 key:0x182800 00:28:10.829 [2024-11-20 12:55:30.328178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:104320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a5080 len:0x1000 key:0x182800 00:28:10.829 [2024-11-20 12:55:30.328194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.829 [2024-11-20 12:55:30.328212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183000 00:28:10.829 [2024-11-20 12:55:30.328236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183000 00:28:10.829 [2024-11-20 12:55:30.328253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:103696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183000 00:28:10.829 [2024-11-20 12:55:30.328271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:104336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389fe00 len:0x1000 key:0x182800 00:28:10.829 [2024-11-20 12:55:30.328288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:104344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ed80 len:0x1000 key:0x182800 00:28:10.829 [2024-11-20 12:55:30.328305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389dd00 len:0x1000 key:0x182800 00:28:10.829 [2024-11-20 12:55:30.328322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183000 00:28:10.829 [2024-11-20 12:55:30.328340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:104360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389bc00 len:0x1000 key:0x182800 00:28:10.829 [2024-11-20 12:55:30.328358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183000 00:28:10.829 [2024-11-20 12:55:30.328375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0x182800 00:28:10.829 [2024-11-20 12:55:30.328392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183000 00:28:10.829 [2024-11-20 12:55:30.328410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013897a00 len:0x1000 key:0x182800 00:28:10.829 [2024-11-20 12:55:30.328429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:104384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013896980 len:0x1000 key:0x182800 00:28:10.829 [2024-11-20 12:55:30.328446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183000 00:28:10.829 [2024-11-20 12:55:30.328463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013894880 len:0x1000 key:0x182800 00:28:10.829 [2024-11-20 12:55:30.328481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 
dnr:0 00:28:10.829 [2024-11-20 12:55:30.328490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013893800 len:0x1000 key:0x182800 00:28:10.829 [2024-11-20 12:55:30.328498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183000 00:28:10.829 [2024-11-20 12:55:30.328515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.829 [2024-11-20 12:55:30.328531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183000 00:28:10.829 [2024-11-20 12:55:30.328548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183000 00:28:10.829 [2024-11-20 12:55:30.328566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x183000 00:28:10.829 [2024-11-20 12:55:30.328583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.829 [2024-11-20 12:55:30.328592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183000 00:28:10.830 [2024-11-20 12:55:30.328600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:104416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388c480 len:0x1000 key:0x182800 00:28:10.830 [2024-11-20 12:55:30.328617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.830 [2024-11-20 12:55:30.328636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:27 nsid:1 lba:104432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388a380 len:0x1000 key:0x182800 00:28:10.830 [2024-11-20 12:55:30.328653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.830 [2024-11-20 12:55:30.328670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013888280 len:0x1000 key:0x182800 00:28:10.830 [2024-11-20 12:55:30.328687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183000 00:28:10.830 [2024-11-20 12:55:30.328704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183000 00:28:10.830 [2024-11-20 12:55:30.328721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.830 [2024-11-20 12:55:30.328737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:104464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013884080 len:0x1000 key:0x182800 00:28:10.830 [2024-11-20 12:55:30.328755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.830 [2024-11-20 12:55:30.328771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183000 00:28:10.830 [2024-11-20 12:55:30.328788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.830 [2024-11-20 12:55:30.328805] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183000 00:28:10.830 [2024-11-20 12:55:30.328825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.830 [2024-11-20 12:55:30.328841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.830 [2024-11-20 12:55:30.328858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387cd00 len:0x1000 key:0x182800 00:28:10.830 [2024-11-20 12:55:30.328875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.830 [2024-11-20 12:55:30.328892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ac00 len:0x1000 key:0x182800 00:28:10.830 [2024-11-20 12:55:30.328909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:104528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.830 [2024-11-20 12:55:30.328926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013878b00 len:0x1000 key:0x182800 00:28:10.830 [2024-11-20 12:55:30.328943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013877a80 len:0x1000 key:0x182800 00:28:10.830 [2024-11-20 12:55:30.328960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328969] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:82 nsid:1 lba:104552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013876a00 len:0x1000 key:0x182800 00:28:10.830 [2024-11-20 12:55:30.328977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.328991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183000 00:28:10.830 [2024-11-20 12:55:30.328999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.329009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183000 00:28:10.830 [2024-11-20 12:55:30.329016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.329026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.830 [2024-11-20 12:55:30.329035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.329045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.830 [2024-11-20 12:55:30.329052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.329062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183000 00:28:10.830 [2024-11-20 12:55:30.329069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.329079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.830 [2024-11-20 12:55:30.329086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.329096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.830 [2024-11-20 12:55:30.329103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.329113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x183000 00:28:10.830 [2024-11-20 12:55:30.329120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.329130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183000 00:28:10.830 [2024-11-20 12:55:30.329137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.329147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:104008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183000 00:28:10.830 [2024-11-20 12:55:30.329155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.329164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183000 00:28:10.830 [2024-11-20 12:55:30.329171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.329181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ee400 len:0x1000 key:0x182800 00:28:10.830 [2024-11-20 12:55:30.329189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.329199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.830 [2024-11-20 12:55:30.329206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.329216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183000 00:28:10.830 [2024-11-20 12:55:30.329223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.830 [2024-11-20 12:55:30.329235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183000 00:28:10.831 [2024-11-20 12:55:30.329242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea200 len:0x1000 key:0x182800 00:28:10.831 [2024-11-20 12:55:30.329259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e9180 len:0x1000 key:0x182800 00:28:10.831 [2024-11-20 12:55:30.329276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183000 00:28:10.831 [2024-11-20 12:55:30.329293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.831 [2024-11-20 12:55:30.329310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x183000 00:28:10.831 [2024-11-20 12:55:30.329327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:104632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e4f80 len:0x1000 key:0x182800 00:28:10.831 [2024-11-20 12:55:30.329344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183000 00:28:10.831 [2024-11-20 12:55:30.329361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:104640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.831 [2024-11-20 12:55:30.329378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183000 00:28:10.831 [2024-11-20 12:55:30.329395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.831 [2024-11-20 12:55:30.329412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183000 00:28:10.831 [2024-11-20 12:55:30.329430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dec80 len:0x1000 key:0x182800 00:28:10.831 [2024-11-20 12:55:30.329447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:99 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.831 [2024-11-20 12:55:30.329464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183000 00:28:10.831 [2024-11-20 12:55:30.329481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183000 00:28:10.831 [2024-11-20 12:55:30.329498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.831 [2024-11-20 12:55:30.329515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183000 00:28:10.831 [2024-11-20 12:55:30.329532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d8980 len:0x1000 key:0x182800 00:28:10.831 [2024-11-20 12:55:30.329549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183000 00:28:10.831 [2024-11-20 12:55:30.329566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d6880 len:0x1000 key:0x182800 00:28:10.831 [2024-11-20 12:55:30.329582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.831 [2024-11-20 12:55:30.329599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.831 [2024-11-20 12:55:30.329616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.831 [2024-11-20 12:55:30.329634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.831 [2024-11-20 12:55:30.329651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.831 [2024-11-20 12:55:30.329667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.831 [2024-11-20 12:55:30.329684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:104144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183000 00:28:10.831 [2024-11-20 12:55:30.329702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ce480 len:0x1000 key:0x182800 00:28:10.831 [2024-11-20 12:55:30.329720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.831 [2024-11-20 12:55:30.329736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183000 00:28:10.831 [2024-11-20 12:55:30.329753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cb300 len:0x1000 key:0x182800 00:28:10.831 [2024-11-20 12:55:30.329771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329780] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.831 [2024-11-20 12:55:30.329787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.831 [2024-11-20 12:55:30.329805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.831 [2024-11-20 12:55:30.329822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c7100 len:0x1000 key:0x182800 00:28:10.831 [2024-11-20 12:55:30.329840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.831 [2024-11-20 12:55:30.329857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.831 [2024-11-20 12:55:30.329867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183000 00:28:10.832 [2024-11-20 12:55:30.329874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.329884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183000 00:28:10.832 [2024-11-20 12:55:30.329891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.329901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:104176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183000 00:28:10.832 [2024-11-20 12:55:30.329908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.329918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c1e80 len:0x1000 key:0x182800 00:28:10.832 [2024-11-20 12:55:30.329925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.329935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.832 [2024-11-20 12:55:30.329942] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.329952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:104192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183000 00:28:10.832 [2024-11-20 12:55:30.329959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.329968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bed00 len:0x1000 key:0x182800 00:28:10.832 [2024-11-20 12:55:30.329976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.329989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bdc80 len:0x1000 key:0x182800 00:28:10.832 [2024-11-20 12:55:30.329997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.330006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:104208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183000 00:28:10.832 [2024-11-20 12:55:30.330013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.330023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:104216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183000 00:28:10.832 [2024-11-20 12:55:30.330032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.330042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:104840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bab00 len:0x1000 key:0x182800 00:28:10.832 [2024-11-20 12:55:30.330049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.330059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x182800 00:28:10.832 [2024-11-20 12:55:30.330066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.330076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183000 00:28:10.832 [2024-11-20 12:55:30.330083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.330093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b7980 len:0x1000 key:0x182800 00:28:10.832 [2024-11-20 12:55:30.330100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.330110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.832 [2024-11-20 12:55:30.330117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.330126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.832 [2024-11-20 12:55:30.330134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.330143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183000 00:28:10.832 [2024-11-20 12:55:30.330151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.330160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.832 [2024-11-20 12:55:30.330167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.330176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.832 [2024-11-20 12:55:30.330184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.330194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x182800 00:28:10.832 [2024-11-20 12:55:30.330201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.330211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183000 00:28:10.832 [2024-11-20 12:55:30.330218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.330228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138af580 len:0x1000 key:0x182800 00:28:10.832 [2024-11-20 12:55:30.330236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.332563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:10.832 [2024-11-20 12:55:30.332576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:10.832 [2024-11-20 12:55:30.332583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104912 len:8 PRP1 0x0 PRP2 0x0 00:28:10.832 [2024-11-20 12:55:30.332592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:30.332627] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 00:28:10.832 [2024-11-20 12:55:30.332643] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:28:10.832 [2024-11-20 12:55:30.332653] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.832 [2024-11-20 12:55:30.335014] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.832 [2024-11-20 12:55:30.354995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:10.832 [2024-11-20 12:55:30.397109] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:10.832 [2024-11-20 12:55:33.749160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:102000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013892780 len:0x1000 key:0x182b00 00:28:10.832 [2024-11-20 12:55:33.749201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:33.749219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:102008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013891700 len:0x1000 key:0x182b00 00:28:10.832 [2024-11-20 12:55:33.749227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:33.749238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183000 00:28:10.832 [2024-11-20 12:55:33.749245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:33.749255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183000 00:28:10.832 [2024-11-20 12:55:33.749263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:33.749272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388e580 len:0x1000 key:0x182b00 00:28:10.832 [2024-11-20 12:55:33.749280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:33.749290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.832 [2024-11-20 12:55:33.749297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:33.749307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.832 [2024-11-20 12:55:33.749314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:33.749329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x183000 00:28:10.832 [2024-11-20 12:55:33.749336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:33.749346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183000 00:28:10.832 [2024-11-20 12:55:33.749353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:33.749363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183000 00:28:10.832 [2024-11-20 12:55:33.749370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:33.749380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e8100 len:0x1000 key:0x182b00 00:28:10.832 [2024-11-20 12:55:33.749388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.832 [2024-11-20 12:55:33.749397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183000 00:28:10.833 [2024-11-20 12:55:33.749405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e6000 len:0x1000 key:0x182b00 00:28:10.833 [2024-11-20 12:55:33.749422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183000 00:28:10.833 [2024-11-20 12:55:33.749440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.833 [2024-11-20 12:55:33.749457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e2e80 len:0x1000 key:0x182b00 00:28:10.833 [2024-11-20 12:55:33.749473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 
12:55:33.749483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e1e00 len:0x1000 key:0x182b00 00:28:10.833 [2024-11-20 12:55:33.749490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183000 00:28:10.833 [2024-11-20 12:55:33.749508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.833 [2024-11-20 12:55:33.749526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d6880 len:0x1000 key:0x182b00 00:28:10.833 [2024-11-20 12:55:33.749544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183000 00:28:10.833 [2024-11-20 12:55:33.749561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.833 [2024-11-20 12:55:33.749578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d3700 len:0x1000 key:0x182b00 00:28:10.833 [2024-11-20 12:55:33.749595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d2680 len:0x1000 key:0x182b00 00:28:10.833 [2024-11-20 12:55:33.749612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.833 [2024-11-20 12:55:33.749628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:102128 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:10.833 [2024-11-20 12:55:33.749645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183000 00:28:10.833 [2024-11-20 12:55:33.749663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ce480 len:0x1000 key:0x182b00 00:28:10.833 [2024-11-20 12:55:33.749679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cd400 len:0x1000 key:0x182b00 00:28:10.833 [2024-11-20 12:55:33.749696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183000 00:28:10.833 [2024-11-20 12:55:33.749713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.833 [2024-11-20 12:55:33.749732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.833 [2024-11-20 12:55:33.749748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183000 00:28:10.833 [2024-11-20 12:55:33.749765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c8180 len:0x1000 key:0x182b00 00:28:10.833 [2024-11-20 12:55:33.749782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.833 [2024-11-20 12:55:33.749798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bdc80 len:0x1000 key:0x182b00 00:28:10.833 [2024-11-20 12:55:33.749814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183000 00:28:10.833 [2024-11-20 12:55:33.749831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:101512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183000 00:28:10.833 [2024-11-20 12:55:33.749848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183000 00:28:10.833 [2024-11-20 12:55:33.749865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183000 00:28:10.833 [2024-11-20 12:55:33.749882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.833 [2024-11-20 12:55:33.749899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.833 [2024-11-20 12:55:33.749908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:102200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.833 [2024-11-20 12:55:33.749915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.749926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b6900 len:0x1000 key:0x182b00 00:28:10.834 [2024-11-20 12:55:33.749933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.749942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.834 [2024-11-20 12:55:33.749950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.749959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:17 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.834 [2024-11-20 12:55:33.749967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.749976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:102232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0x182b00 00:28:10.834 [2024-11-20 12:55:33.749987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.749997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183000 00:28:10.834 [2024-11-20 12:55:33.750004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.834 [2024-11-20 12:55:33.750021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x182b00 00:28:10.834 [2024-11-20 12:55:33.750037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.834 [2024-11-20 12:55:33.750054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:102264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.834 [2024-11-20 12:55:33.750071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183000 00:28:10.834 [2024-11-20 12:55:33.750088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.834 [2024-11-20 12:55:33.750105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ab380 len:0x1000 key:0x182b00 00:28:10.834 [2024-11-20 12:55:33.750123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183000 00:28:10.834 [2024-11-20 12:55:33.750140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x182b00 00:28:10.834 [2024-11-20 12:55:33.750157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a8200 len:0x1000 key:0x182b00 00:28:10.834 [2024-11-20 12:55:33.750174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.834 [2024-11-20 12:55:33.750191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a6100 len:0x1000 key:0x182b00 00:28:10.834 [2024-11-20 12:55:33.750207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183000 00:28:10.834 [2024-11-20 12:55:33.750224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a4000 len:0x1000 key:0x182b00 00:28:10.834 [2024-11-20 12:55:33.750241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:101632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183000 00:28:10.834 [2024-11-20 12:55:33.750258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:102328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.834 [2024-11-20 12:55:33.750275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750284] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.834 [2024-11-20 12:55:33.750292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389fe00 len:0x1000 key:0x182b00 00:28:10.834 [2024-11-20 12:55:33.750308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ed80 len:0x1000 key:0x182b00 00:28:10.834 [2024-11-20 12:55:33.750327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:101664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183000 00:28:10.834 [2024-11-20 12:55:33.750344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183000 00:28:10.834 [2024-11-20 12:55:33.750360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:102360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.834 [2024-11-20 12:55:33.750377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183000 00:28:10.834 [2024-11-20 12:55:33.750394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c2f00 len:0x1000 key:0x182b00 00:28:10.834 [2024-11-20 12:55:33.750411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183000 00:28:10.834 [2024-11-20 12:55:33.750427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:102376 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x2000138c0e00 len:0x1000 key:0x182b00 00:28:10.834 [2024-11-20 12:55:33.750444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183000 00:28:10.834 [2024-11-20 12:55:33.750461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dfd00 len:0x1000 key:0x182b00 00:28:10.834 [2024-11-20 12:55:33.750477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.834 [2024-11-20 12:55:33.750495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.834 [2024-11-20 12:55:33.750512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183000 00:28:10.834 [2024-11-20 12:55:33.750529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.834 [2024-11-20 12:55:33.750539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389cc80 len:0x1000 key:0x182b00 00:28:10.834 [2024-11-20 12:55:33.750546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389bc00 len:0x1000 key:0x182b00 00:28:10.835 [2024-11-20 12:55:33.750562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.835 [2024-11-20 12:55:33.750579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.835 [2024-11-20 12:55:33.750596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.835 [2024-11-20 12:55:33.750612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.835 [2024-11-20 12:55:33.750628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013896980 len:0x1000 key:0x182b00 00:28:10.835 [2024-11-20 12:55:33.750647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.835 [2024-11-20 12:55:33.750663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:102472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388c480 len:0x1000 key:0x182b00 00:28:10.835 [2024-11-20 12:55:33.750680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.835 [2024-11-20 12:55:33.750696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388a380 len:0x1000 key:0x182b00 00:28:10.835 [2024-11-20 12:55:33.750714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.835 [2024-11-20 12:55:33.750731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:101776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183000 00:28:10.835 [2024-11-20 12:55:33.750748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101784 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x200007578000 len:0x1000 key:0x183000 00:28:10.835 [2024-11-20 12:55:33.750765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.835 [2024-11-20 12:55:33.750782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013885100 len:0x1000 key:0x182b00 00:28:10.835 [2024-11-20 12:55:33.750799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013884080 len:0x1000 key:0x182b00 00:28:10.835 [2024-11-20 12:55:33.750816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183000 00:28:10.835 [2024-11-20 12:55:33.750833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.835 [2024-11-20 12:55:33.750849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183000 00:28:10.835 [2024-11-20 12:55:33.750866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x183000 00:28:10.835 [2024-11-20 12:55:33.750884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.835 [2024-11-20 12:55:33.750901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.835 [2024-11-20 12:55:33.750919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387cd00 len:0x1000 key:0x182b00 00:28:10.835 [2024-11-20 12:55:33.750936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:102560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.835 [2024-11-20 12:55:33.750953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ac00 len:0x1000 key:0x182b00 00:28:10.835 [2024-11-20 12:55:33.750970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183000 00:28:10.835 [2024-11-20 12:55:33.750990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.750999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183000 00:28:10.835 [2024-11-20 12:55:33.751007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.751016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.835 [2024-11-20 12:55:33.751023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.751032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.835 [2024-11-20 12:55:33.751040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.751049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f9980 len:0x1000 key:0x182b00 00:28:10.835 [2024-11-20 12:55:33.751056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.751066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f8900 len:0x1000 key:0x182b00 00:28:10.835 [2024-11-20 12:55:33.751073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.751082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:96 nsid:1 lba:102608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f7880 len:0x1000 key:0x182b00 00:28:10.835 [2024-11-20 12:55:33.751090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.751099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:102616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f6800 len:0x1000 key:0x182b00 00:28:10.835 [2024-11-20 12:55:33.751107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.751118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183000 00:28:10.835 [2024-11-20 12:55:33.751125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.751134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.835 [2024-11-20 12:55:33.751141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.751151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183000 00:28:10.835 [2024-11-20 12:55:33.751158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.835 [2024-11-20 12:55:33.751167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.835 [2024-11-20 12:55:33.751174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:33.751185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183000 00:28:10.836 [2024-11-20 12:55:33.751192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:33.751202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013895900 len:0x1000 key:0x182b00 00:28:10.836 [2024-11-20 12:55:33.751209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:33.751219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013894880 len:0x1000 key:0x182b00 00:28:10.836 [2024-11-20 12:55:33.751226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:33.751236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f0500 len:0x1000 key:0x182b00 00:28:10.836 
[2024-11-20 12:55:33.751243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:33.751252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183000 00:28:10.836 [2024-11-20 12:55:33.751260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:33.751269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183000 00:28:10.836 [2024-11-20 12:55:33.751277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:33.751286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:101952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183000 00:28:10.836 [2024-11-20 12:55:33.751294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:33.751303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183000 00:28:10.836 [2024-11-20 12:55:33.751312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:33.751321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138eb280 len:0x1000 key:0x182b00 00:28:10.836 [2024-11-20 12:55:33.751329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:33.751338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x183000 00:28:10.836 [2024-11-20 12:55:33.751345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:33.751354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.836 [2024-11-20 12:55:33.751362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:33.753662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:10.836 [2024-11-20 12:55:33.753675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:10.836 [2024-11-20 12:55:33.753682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101992 len:8 PRP1 0x0 PRP2 0x0 00:28:10.836 [2024-11-20 12:55:33.753690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:33.753722] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
0x2000192e4980 was disconnected and freed. reset controller. 00:28:10.836 [2024-11-20 12:55:33.753731] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:28:10.836 [2024-11-20 12:55:33.753740] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.836 [2024-11-20 12:55:33.756234] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.836 [2024-11-20 12:55:33.775874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:10.836 [2024-11-20 12:55:33.821367] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:10.836 [2024-11-20 12:55:38.112020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:58328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013893800 len:0x1000 key:0x182800 00:28:10.836 [2024-11-20 12:55:38.112060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:58336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013892780 len:0x1000 key:0x182800 00:28:10.836 [2024-11-20 12:55:38.112088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:57672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183000 00:28:10.836 [2024-11-20 12:55:38.112105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:58344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013890680 len:0x1000 key:0x182800 00:28:10.836 [2024-11-20 12:55:38.112123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183000 00:28:10.836 [2024-11-20 12:55:38.112146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.836 [2024-11-20 12:55:38.112164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.836 [2024-11-20 12:55:38.112181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:67 nsid:1 lba:58368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dbb00 len:0x1000 key:0x182800 00:28:10.836 [2024-11-20 12:55:38.112197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:57696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183000 00:28:10.836 [2024-11-20 12:55:38.112215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.836 [2024-11-20 12:55:38.112231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e1e00 len:0x1000 key:0x182800 00:28:10.836 [2024-11-20 12:55:38.112248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e0d80 len:0x1000 key:0x182800 00:28:10.836 [2024-11-20 12:55:38.112266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183000 00:28:10.836 [2024-11-20 12:55:38.112283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.836 [2024-11-20 12:55:38.112301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:57736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183000 00:28:10.836 [2024-11-20 12:55:38.112318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183000 00:28:10.836 [2024-11-20 12:55:38.112337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.836 [2024-11-20 12:55:38.112354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:58416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d2680 len:0x1000 key:0x182800 00:28:10.836 [2024-11-20 12:55:38.112371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:57760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183000 00:28:10.836 [2024-11-20 12:55:38.112389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183000 00:28:10.836 [2024-11-20 12:55:38.112406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183000 00:28:10.836 [2024-11-20 12:55:38.112423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 12:55:38.112433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:57784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183000 00:28:10.836 [2024-11-20 12:55:38.112441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cd400 len:0x1000 key:0x182800 00:28:10.837 [2024-11-20 12:55:38.112458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.837 [2024-11-20 12:55:38.112476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cb300 len:0x1000 key:0x182800 00:28:10.837 [2024-11-20 12:55:38.112492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.837 [2024-11-20 12:55:38.112509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 
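The long runs of ABORTED - SQ DELETION completions above accompany the failover notices printed earlier: bdev_nvme moves the active path for nqn.2016-06.io.spdk:cnode1 from 192.168.100.8:4421 to 192.168.100.8:4422 and resets the controller, so I/O still queued on the old submission queue is aborted and reissued on the new path. As a point of reference, a target exposing one subsystem on both of those listeners can be set up with SPDK's rpc.py roughly as sketched below; this is a minimal illustration, not the exact commands this job ran, the Malloc0/Nvme0 names are placeholders, and flag spellings can differ between SPDK releases.

  # target side: back the namespace with a malloc bdev and publish two RDMA listeners
  scripts/rpc.py nvmf_create_transport -t RDMA
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -f ipv4
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 -f ipv4
  # initiator side: attach the first path; further attach calls with the same -b name
  # can register the remaining listeners as failover paths (details depend on the SPDK version)
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

With a layout like this, deleting or failing the 4421 listener is what produces the "Start failover ... to 192.168.100.8:4422" and "Resetting controller successful" notices seen in this log.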
00:28:10.837 [2024-11-20 12:55:38.112518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.837 [2024-11-20 12:55:38.112525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183000 00:28:10.837 [2024-11-20 12:55:38.112547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:57832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x183000 00:28:10.837 [2024-11-20 12:55:38.112564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bdc80 len:0x1000 key:0x182800 00:28:10.837 [2024-11-20 12:55:38.112581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.837 [2024-11-20 12:55:38.112598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.837 [2024-11-20 12:55:38.112614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183000 00:28:10.837 [2024-11-20 12:55:38.112632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183000 00:28:10.837 [2024-11-20 12:55:38.112649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x182800 00:28:10.837 [2024-11-20 12:55:38.112666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58496 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:10.837 [2024-11-20 12:55:38.112683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183000 00:28:10.837 [2024-11-20 12:55:38.112701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.837 [2024-11-20 12:55:38.112717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183000 00:28:10.837 [2024-11-20 12:55:38.112734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b4800 len:0x1000 key:0x182800 00:28:10.837 [2024-11-20 12:55:38.112756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183000 00:28:10.837 [2024-11-20 12:55:38.112773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183000 00:28:10.837 [2024-11-20 12:55:38.112790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183000 00:28:10.837 [2024-11-20 12:55:38.112807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:58520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x182800 00:28:10.837 [2024-11-20 12:55:38.112823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:57920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183000 00:28:10.837 [2024-11-20 12:55:38.112841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:57928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183000 00:28:10.837 [2024-11-20 12:55:38.112858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ad480 len:0x1000 key:0x182800 00:28:10.837 [2024-11-20 12:55:38.112875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:58536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ac400 len:0x1000 key:0x182800 00:28:10.837 [2024-11-20 12:55:38.112892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ab380 len:0x1000 key:0x182800 00:28:10.837 [2024-11-20 12:55:38.112909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa300 len:0x1000 key:0x182800 00:28:10.837 [2024-11-20 12:55:38.112925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:58560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x182800 00:28:10.837 [2024-11-20 12:55:38.112944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a8200 len:0x1000 key:0x182800 00:28:10.837 [2024-11-20 12:55:38.112961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.837 [2024-11-20 12:55:38.112978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.112992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.837 [2024-11-20 12:55:38.112999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 
00:28:10.837 [2024-11-20 12:55:38.113009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a5080 len:0x1000 key:0x182800 00:28:10.837 [2024-11-20 12:55:38.113016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 12:55:38.113026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a4000 len:0x1000 key:0x182800 00:28:10.838 [2024-11-20 12:55:38.113034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183000 00:28:10.838 [2024-11-20 12:55:38.113050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a1f00 len:0x1000 key:0x182800 00:28:10.838 [2024-11-20 12:55:38.113067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c1e80 len:0x1000 key:0x182800 00:28:10.838 [2024-11-20 12:55:38.113084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:58624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c0e00 len:0x1000 key:0x182800 00:28:10.838 [2024-11-20 12:55:38.113101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bfd80 len:0x1000 key:0x182800 00:28:10.838 [2024-11-20 12:55:38.113117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183000 00:28:10.838 [2024-11-20 12:55:38.113134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.838 [2024-11-20 12:55:38.113153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:92 nsid:1 lba:58024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183000 00:28:10.838 [2024-11-20 12:55:38.113169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.838 [2024-11-20 12:55:38.113186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389cc80 len:0x1000 key:0x182800 00:28:10.838 [2024-11-20 12:55:38.113204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:58664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389bc00 len:0x1000 key:0x182800 00:28:10.838 [2024-11-20 12:55:38.113221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183000 00:28:10.838 [2024-11-20 12:55:38.113238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:58672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0x182800 00:28:10.838 [2024-11-20 12:55:38.113255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.838 [2024-11-20 12:55:38.113271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.838 [2024-11-20 12:55:38.113288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183000 00:28:10.838 [2024-11-20 12:55:38.113305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.838 [2024-11-20 12:55:38.113322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388c480 len:0x1000 key:0x182800 00:28:10.838 [2024-11-20 12:55:38.113340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.838 [2024-11-20 12:55:38.113357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.838 [2024-11-20 12:55:38.113374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013889300 len:0x1000 key:0x182800 00:28:10.838 [2024-11-20 12:55:38.113390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183000 00:28:10.838 [2024-11-20 12:55:38.113407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:58736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013887200 len:0x1000 key:0x182800 00:28:10.838 [2024-11-20 12:55:38.113424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:58744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013886180 len:0x1000 key:0x182800 00:28:10.838 [2024-11-20 12:55:38.113441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.838 [2024-11-20 12:55:38.113457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183000 00:28:10.838 [2024-11-20 12:55:38.113474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 
12:55:38.113484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:58128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183000 00:28:10.838 [2024-11-20 12:55:38.113491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013881f80 len:0x1000 key:0x182800 00:28:10.838 [2024-11-20 12:55:38.113508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.838 [2024-11-20 12:55:38.113524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.838 [2024-11-20 12:55:38.113543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.838 [2024-11-20 12:55:38.113559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.838 [2024-11-20 12:55:38.113576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.838 [2024-11-20 12:55:38.113592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387bc80 len:0x1000 key:0x182800 00:28:10.838 [2024-11-20 12:55:38.113609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183000 00:28:10.838 [2024-11-20 12:55:38.113626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389fe00 len:0x1000 key:0x182800 00:28:10.838 [2024-11-20 
12:55:38.113643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.838 [2024-11-20 12:55:38.113652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.839 [2024-11-20 12:55:38.113660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.839 [2024-11-20 12:55:38.113676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183000 00:28:10.839 [2024-11-20 12:55:38.113693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:58840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ac00 len:0x1000 key:0x182800 00:28:10.839 [2024-11-20 12:55:38.113710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:58848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013879b80 len:0x1000 key:0x182800 00:28:10.839 [2024-11-20 12:55:38.113727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.839 [2024-11-20 12:55:38.113745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:58864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013877a80 len:0x1000 key:0x182800 00:28:10.839 [2024-11-20 12:55:38.113762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013876a00 len:0x1000 key:0x182800 00:28:10.839 [2024-11-20 12:55:38.113779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.839 [2024-11-20 12:55:38.113796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 
12:55:38.113806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f8900 len:0x1000 key:0x182800 00:28:10.839 [2024-11-20 12:55:38.113813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183000 00:28:10.839 [2024-11-20 12:55:38.113830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c5000 len:0x1000 key:0x182800 00:28:10.839 [2024-11-20 12:55:38.113847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.839 [2024-11-20 12:55:38.113864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.839 [2024-11-20 12:55:38.113881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183000 00:28:10.839 [2024-11-20 12:55:38.113897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:58256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183000 00:28:10.839 [2024-11-20 12:55:38.113914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183000 00:28:10.839 [2024-11-20 12:55:38.113933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.839 [2024-11-20 12:55:38.113949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58928 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000138e6000 len:0x1000 key:0x182800 00:28:10.839 [2024-11-20 12:55:38.113966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.839 [2024-11-20 12:55:38.113986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.113996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.839 [2024-11-20 12:55:38.114003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.114013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x183000 00:28:10.839 [2024-11-20 12:55:38.114020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.114029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.839 [2024-11-20 12:55:38.114037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.114046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f3680 len:0x1000 key:0x182800 00:28:10.839 [2024-11-20 12:55:38.114053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.114063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f2600 len:0x1000 key:0x182800 00:28:10.839 [2024-11-20 12:55:38.114070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.114079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.839 [2024-11-20 12:55:38.114087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.114096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.839 [2024-11-20 12:55:38.114103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.114112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.839 [2024-11-20 12:55:38.114120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 
12:55:38.114129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.839 [2024-11-20 12:55:38.114137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.114147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:58304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183000 00:28:10.839 [2024-11-20 12:55:38.114155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.114164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.839 [2024-11-20 12:55:38.114171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.114181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ed380 len:0x1000 key:0x182800 00:28:10.839 [2024-11-20 12:55:38.114188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.114198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.839 [2024-11-20 12:55:38.114205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.114214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.839 [2024-11-20 12:55:38.114221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.114231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea200 len:0x1000 key:0x182800 00:28:10.839 [2024-11-20 12:55:38.114238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d4851000 sqhd:5310 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.116643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:10.839 [2024-11-20 12:55:38.116655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:10.839 [2024-11-20 12:55:38.116662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59048 len:8 PRP1 0x0 PRP2 0x0 00:28:10.839 [2024-11-20 12:55:38.116670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.839 [2024-11-20 12:55:38.116700] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 
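The wall of *NOTICE* lines above is the expected side effect of a path failing over: once the RDMA send queue is deleted, every still-queued command completes with ABORTED - SQ DELETION (status 00/08), after which the qpair is freed and the controller is reset. A minimal sketch for summarizing that noise when reading a log like this offline, assuming the console output has been saved to a file named bdevperf-console.log (a hypothetical name, not something the test scripts produce):

# How many queued commands were aborted by the SQ deletion, and how they split
# between reads and writes. Works on any text file holding the lines above.
grep -c 'ABORTED - SQ DELETION' bdevperf-console.log
grep 'nvme_io_qpair_print_command' bdevperf-console.log \
  | grep -oE '(READ|WRITE) sqid:[0-9]+' \
  | awk '{print $1}' | sort | uniq -c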
00:28:10.839 [2024-11-20 12:55:38.116709] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:28:10.839 [2024-11-20 12:55:38.116717] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.840 [2024-11-20 12:55:38.119028] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.840 [2024-11-20 12:55:38.138936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:10.840 [2024-11-20 12:55:38.182849] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:10.840 00:28:10.840 Latency(us) 00:28:10.840 [2024-11-20T11:55:43.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.840 [2024-11-20T11:55:43.948Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:10.840 Verification LBA range: start 0x0 length 0x4000 00:28:10.840 NVMe0n1 : 15.01 21753.30 84.97 289.30 0.00 5792.55 344.75 1020613.97 00:28:10.840 [2024-11-20T11:55:43.948Z] =================================================================================================================== 00:28:10.840 [2024-11-20T11:55:43.948Z] Total : 21753.30 84.97 289.30 0.00 5792.55 344.75 1020613.97 00:28:10.840 Received shutdown signal, test time was about 15.000000 seconds 00:28:10.840 00:28:10.840 Latency(us) 00:28:10.840 [2024-11-20T11:55:43.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.840 [2024-11-20T11:55:43.948Z] =================================================================================================================== 00:28:10.840 [2024-11-20T11:55:43.948Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:10.840 12:55:43 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:28:10.840 12:55:43 -- host/failover.sh@65 -- # count=3 00:28:10.840 12:55:43 -- host/failover.sh@67 -- # (( count != 3 )) 00:28:10.840 12:55:43 -- host/failover.sh@73 -- # bdevperf_pid=674592 00:28:10.840 12:55:43 -- host/failover.sh@75 -- # waitforlisten 674592 /var/tmp/bdevperf.sock 00:28:10.840 12:55:43 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:28:10.840 12:55:43 -- common/autotest_common.sh@829 -- # '[' -z 674592 ']' 00:28:10.840 12:55:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:10.840 12:55:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:10.840 12:55:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:10.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
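The failover.sh trace above boils down to a gate plus a relaunch, roughly like the sketch below (the $rootdir and $logfile names are illustrative, not the variables the script actually uses): the 15-second verify run must have logged exactly three successful controller resets, one for each failover between the listeners, and bdevperf is then restarted idle (-z) behind an RPC socket so the next phase can add and remove paths by hand.

# Gate on the failover count, then start an RPC-driven bdevperf instance.
count=$(grep -c 'Resetting controller successful' "$logfile")
(( count != 3 )) && exit 1    # all three failovers must have recovered

"$rootdir"/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!               # the trace then waits for the socket to appear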
00:28:10.840 12:55:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:10.840 12:55:43 -- common/autotest_common.sh@10 -- # set +x 00:28:11.411 12:55:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:11.411 12:55:44 -- common/autotest_common.sh@862 -- # return 0 00:28:11.411 12:55:44 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:28:11.671 [2024-11-20 12:55:44.523147] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:28:11.671 12:55:44 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:28:11.671 [2024-11-20 12:55:44.687682] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:28:11.671 12:55:44 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:11.933 NVMe0n1 00:28:11.933 12:55:44 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:12.194 00:28:12.194 12:55:45 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:12.454 00:28:12.454 12:55:45 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:12.454 12:55:45 -- host/failover.sh@82 -- # grep -q NVMe0 00:28:12.714 12:55:45 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:12.714 12:55:45 -- host/failover.sh@87 -- # sleep 3 00:28:16.017 12:55:48 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:16.017 12:55:48 -- host/failover.sh@88 -- # grep -q NVMe0 00:28:16.017 12:55:48 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:16.017 12:55:48 -- host/failover.sh@90 -- # run_test_pid=675675 00:28:16.017 12:55:48 -- host/failover.sh@92 -- # wait 675675 00:28:16.957 0 00:28:16.957 12:55:50 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:16.957 [2024-11-20 12:55:43.610612] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
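Condensed, the RPC sequence just traced is: publish the subsystem on the two extra RDMA ports, give the idle bdevperf one bdev_nvme path per port under the same controller name, pull the primary path out from under it, and drive an I/O pass over the socket. A sketch of those calls, with rpc.py and bdevperf.py written by their bare names only for readability (the trace invokes them by full path):

# Target: listen on the additional ports for the existing subsystem.
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422

# Initiator (the RPC-driven bdevperf): one path per port, all named NVMe0.
for port in 4420 4421 4422; do
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
        -a 192.168.100.8 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done

# Remove the primary path so the remaining ones have to carry the I/O...
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma \
    -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# ...and trigger a test pass on whatever bdevs the instance now sees.
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests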
00:28:16.957 [2024-11-20 12:55:43.610684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674592 ] 00:28:16.957 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.957 [2024-11-20 12:55:43.672218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.957 [2024-11-20 12:55:43.733354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.957 [2024-11-20 12:55:45.749361] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:28:16.957 [2024-11-20 12:55:45.750037] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:16.957 [2024-11-20 12:55:45.750068] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:16.957 [2024-11-20 12:55:45.775729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:16.957 [2024-11-20 12:55:45.801765] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:16.957 Running I/O for 1 seconds... 00:28:16.957 00:28:16.957 Latency(us) 00:28:16.957 [2024-11-20T11:55:50.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.957 [2024-11-20T11:55:50.065Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:16.957 Verification LBA range: start 0x0 length 0x4000 00:28:16.957 NVMe0n1 : 1.00 27584.48 107.75 0.00 0.00 4614.38 928.43 14745.60 00:28:16.957 [2024-11-20T11:55:50.065Z] =================================================================================================================== 00:28:16.957 [2024-11-20T11:55:50.065Z] Total : 27584.48 107.75 0.00 0.00 4614.38 928.43 14745.60 00:28:16.957 12:55:50 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:16.957 12:55:50 -- host/failover.sh@95 -- # grep -q NVMe0 00:28:17.218 12:55:50 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:17.479 12:55:50 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:17.479 12:55:50 -- host/failover.sh@99 -- # grep -q NVMe0 00:28:17.740 12:55:50 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:17.740 12:55:50 -- host/failover.sh@101 -- # sleep 3 00:28:21.038 12:55:53 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:21.038 12:55:53 -- host/failover.sh@103 -- # grep -q NVMe0 00:28:21.038 12:55:53 -- host/failover.sh@108 -- # killprocess 674592 00:28:21.039 12:55:53 -- common/autotest_common.sh@936 -- # '[' -z 674592 ']' 00:28:21.039 12:55:53 -- common/autotest_common.sh@940 -- # kill -0 674592 00:28:21.039 12:55:53 -- common/autotest_common.sh@941 -- # uname 00:28:21.039 12:55:53 -- common/autotest_common.sh@941 -- 
# '[' Linux = Linux ']' 00:28:21.039 12:55:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 674592 00:28:21.039 12:55:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:21.039 12:55:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:21.039 12:55:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 674592' 00:28:21.039 killing process with pid 674592 00:28:21.039 12:55:54 -- common/autotest_common.sh@955 -- # kill 674592 00:28:21.039 12:55:54 -- common/autotest_common.sh@960 -- # wait 674592 00:28:21.299 12:55:54 -- host/failover.sh@110 -- # sync 00:28:21.299 12:55:54 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:21.299 12:55:54 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:21.299 12:55:54 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:21.299 12:55:54 -- host/failover.sh@116 -- # nvmftestfini 00:28:21.299 12:55:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:21.299 12:55:54 -- nvmf/common.sh@116 -- # sync 00:28:21.299 12:55:54 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:28:21.299 12:55:54 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:28:21.299 12:55:54 -- nvmf/common.sh@119 -- # set +e 00:28:21.299 12:55:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:21.299 12:55:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:28:21.299 rmmod nvme_rdma 00:28:21.299 rmmod nvme_fabrics 00:28:21.299 12:55:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:21.299 12:55:54 -- nvmf/common.sh@123 -- # set -e 00:28:21.299 12:55:54 -- nvmf/common.sh@124 -- # return 0 00:28:21.299 12:55:54 -- nvmf/common.sh@477 -- # '[' -n 670940 ']' 00:28:21.299 12:55:54 -- nvmf/common.sh@478 -- # killprocess 670940 00:28:21.299 12:55:54 -- common/autotest_common.sh@936 -- # '[' -z 670940 ']' 00:28:21.299 12:55:54 -- common/autotest_common.sh@940 -- # kill -0 670940 00:28:21.299 12:55:54 -- common/autotest_common.sh@941 -- # uname 00:28:21.299 12:55:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:21.299 12:55:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 670940 00:28:21.560 12:55:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:21.560 12:55:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:21.560 12:55:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 670940' 00:28:21.560 killing process with pid 670940 00:28:21.560 12:55:54 -- common/autotest_common.sh@955 -- # kill 670940 00:28:21.560 12:55:54 -- common/autotest_common.sh@960 -- # wait 670940 00:28:21.560 12:55:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:21.560 12:55:54 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:28:21.560 00:28:21.560 real 0m37.490s 00:28:21.560 user 2m2.901s 00:28:21.560 sys 0m7.138s 00:28:21.560 12:55:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:21.560 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:28:21.560 ************************************ 00:28:21.560 END TEST nvmf_failover 00:28:21.560 ************************************ 00:28:21.821 12:55:54 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:28:21.821 12:55:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:21.821 12:55:54 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:28:21.821 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:28:21.821 ************************************ 00:28:21.821 START TEST nvmf_discovery 00:28:21.821 ************************************ 00:28:21.821 12:55:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:28:21.821 * Looking for test storage... 00:28:21.821 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:21.821 12:55:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:21.821 12:55:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:21.821 12:55:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:21.821 12:55:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:21.821 12:55:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:21.821 12:55:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:21.821 12:55:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:21.821 12:55:54 -- scripts/common.sh@335 -- # IFS=.-: 00:28:21.821 12:55:54 -- scripts/common.sh@335 -- # read -ra ver1 00:28:21.821 12:55:54 -- scripts/common.sh@336 -- # IFS=.-: 00:28:21.821 12:55:54 -- scripts/common.sh@336 -- # read -ra ver2 00:28:21.821 12:55:54 -- scripts/common.sh@337 -- # local 'op=<' 00:28:21.821 12:55:54 -- scripts/common.sh@339 -- # ver1_l=2 00:28:21.821 12:55:54 -- scripts/common.sh@340 -- # ver2_l=1 00:28:21.821 12:55:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:21.821 12:55:54 -- scripts/common.sh@343 -- # case "$op" in 00:28:21.821 12:55:54 -- scripts/common.sh@344 -- # : 1 00:28:21.821 12:55:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:21.821 12:55:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:21.821 12:55:54 -- scripts/common.sh@364 -- # decimal 1 00:28:21.821 12:55:54 -- scripts/common.sh@352 -- # local d=1 00:28:21.821 12:55:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:21.821 12:55:54 -- scripts/common.sh@354 -- # echo 1 00:28:21.821 12:55:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:21.821 12:55:54 -- scripts/common.sh@365 -- # decimal 2 00:28:21.821 12:55:54 -- scripts/common.sh@352 -- # local d=2 00:28:21.821 12:55:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:21.821 12:55:54 -- scripts/common.sh@354 -- # echo 2 00:28:21.821 12:55:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:21.821 12:55:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:21.821 12:55:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:21.821 12:55:54 -- scripts/common.sh@367 -- # return 0 00:28:21.821 12:55:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:21.821 12:55:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:21.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.821 --rc genhtml_branch_coverage=1 00:28:21.821 --rc genhtml_function_coverage=1 00:28:21.821 --rc genhtml_legend=1 00:28:21.821 --rc geninfo_all_blocks=1 00:28:21.821 --rc geninfo_unexecuted_blocks=1 00:28:21.821 00:28:21.821 ' 00:28:21.821 12:55:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:21.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.821 --rc genhtml_branch_coverage=1 00:28:21.821 --rc genhtml_function_coverage=1 00:28:21.821 --rc genhtml_legend=1 00:28:21.821 --rc geninfo_all_blocks=1 00:28:21.821 --rc geninfo_unexecuted_blocks=1 00:28:21.821 00:28:21.821 ' 00:28:21.821 12:55:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:21.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.821 --rc genhtml_branch_coverage=1 00:28:21.821 --rc genhtml_function_coverage=1 00:28:21.821 --rc genhtml_legend=1 00:28:21.821 --rc geninfo_all_blocks=1 00:28:21.821 --rc geninfo_unexecuted_blocks=1 00:28:21.821 00:28:21.821 ' 00:28:21.821 12:55:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:21.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.821 --rc genhtml_branch_coverage=1 00:28:21.821 --rc genhtml_function_coverage=1 00:28:21.821 --rc genhtml_legend=1 00:28:21.821 --rc geninfo_all_blocks=1 00:28:21.821 --rc geninfo_unexecuted_blocks=1 00:28:21.821 00:28:21.821 ' 00:28:21.822 12:55:54 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:21.822 12:55:54 -- nvmf/common.sh@7 -- # uname -s 00:28:21.822 12:55:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.822 12:55:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.822 12:55:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.822 12:55:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.822 12:55:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.822 12:55:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.822 12:55:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.822 12:55:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.822 12:55:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.822 12:55:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.822 12:55:54 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:21.822 12:55:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:21.822 12:55:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.822 12:55:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.822 12:55:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:21.822 12:55:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:21.822 12:55:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.822 12:55:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.822 12:55:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.822 12:55:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.822 12:55:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.822 12:55:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.822 12:55:54 -- paths/export.sh@5 -- # export PATH 00:28:21.822 12:55:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.822 12:55:54 -- nvmf/common.sh@46 -- # : 0 00:28:21.822 12:55:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:21.822 12:55:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:21.822 12:55:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:21.822 12:55:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.822 12:55:54 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.822 12:55:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:21.822 12:55:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:21.822 12:55:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:21.822 12:55:54 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:28:21.822 12:55:54 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:21.822 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:28:21.822 12:55:54 -- host/discovery.sh@13 -- # exit 0 00:28:21.822 00:28:21.822 real 0m0.214s 00:28:21.822 user 0m0.126s 00:28:21.822 sys 0m0.102s 00:28:21.822 12:55:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:21.822 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:28:21.822 ************************************ 00:28:21.822 END TEST nvmf_discovery 00:28:21.822 ************************************ 00:28:22.084 12:55:54 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:28:22.084 12:55:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:22.084 12:55:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:22.084 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:28:22.084 ************************************ 00:28:22.084 START TEST nvmf_discovery_remove_ifc 00:28:22.084 ************************************ 00:28:22.084 12:55:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:28:22.084 * Looking for test storage... 00:28:22.084 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:22.084 12:55:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:22.084 12:55:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:22.084 12:55:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:22.084 12:55:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:22.084 12:55:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:22.084 12:55:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:22.084 12:55:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:22.084 12:55:55 -- scripts/common.sh@335 -- # IFS=.-: 00:28:22.084 12:55:55 -- scripts/common.sh@335 -- # read -ra ver1 00:28:22.084 12:55:55 -- scripts/common.sh@336 -- # IFS=.-: 00:28:22.084 12:55:55 -- scripts/common.sh@336 -- # read -ra ver2 00:28:22.084 12:55:55 -- scripts/common.sh@337 -- # local 'op=<' 00:28:22.084 12:55:55 -- scripts/common.sh@339 -- # ver1_l=2 00:28:22.084 12:55:55 -- scripts/common.sh@340 -- # ver2_l=1 00:28:22.084 12:55:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:22.084 12:55:55 -- scripts/common.sh@343 -- # case "$op" in 00:28:22.084 12:55:55 -- scripts/common.sh@344 -- # : 1 00:28:22.084 12:55:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:22.084 12:55:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:22.084 12:55:55 -- scripts/common.sh@364 -- # decimal 1 00:28:22.084 12:55:55 -- scripts/common.sh@352 -- # local d=1 00:28:22.084 12:55:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:22.084 12:55:55 -- scripts/common.sh@354 -- # echo 1 00:28:22.084 12:55:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:22.084 12:55:55 -- scripts/common.sh@365 -- # decimal 2 00:28:22.084 12:55:55 -- scripts/common.sh@352 -- # local d=2 00:28:22.084 12:55:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:22.084 12:55:55 -- scripts/common.sh@354 -- # echo 2 00:28:22.084 12:55:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:22.084 12:55:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:22.084 12:55:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:22.084 12:55:55 -- scripts/common.sh@367 -- # return 0 00:28:22.084 12:55:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:22.084 12:55:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:22.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.084 --rc genhtml_branch_coverage=1 00:28:22.084 --rc genhtml_function_coverage=1 00:28:22.084 --rc genhtml_legend=1 00:28:22.084 --rc geninfo_all_blocks=1 00:28:22.084 --rc geninfo_unexecuted_blocks=1 00:28:22.084 00:28:22.084 ' 00:28:22.084 12:55:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:22.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.084 --rc genhtml_branch_coverage=1 00:28:22.084 --rc genhtml_function_coverage=1 00:28:22.084 --rc genhtml_legend=1 00:28:22.084 --rc geninfo_all_blocks=1 00:28:22.084 --rc geninfo_unexecuted_blocks=1 00:28:22.084 00:28:22.084 ' 00:28:22.084 12:55:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:22.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.084 --rc genhtml_branch_coverage=1 00:28:22.084 --rc genhtml_function_coverage=1 00:28:22.084 --rc genhtml_legend=1 00:28:22.084 --rc geninfo_all_blocks=1 00:28:22.084 --rc geninfo_unexecuted_blocks=1 00:28:22.084 00:28:22.084 ' 00:28:22.084 12:55:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:22.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.084 --rc genhtml_branch_coverage=1 00:28:22.084 --rc genhtml_function_coverage=1 00:28:22.084 --rc genhtml_legend=1 00:28:22.084 --rc geninfo_all_blocks=1 00:28:22.084 --rc geninfo_unexecuted_blocks=1 00:28:22.084 00:28:22.084 ' 00:28:22.084 12:55:55 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:22.084 12:55:55 -- nvmf/common.sh@7 -- # uname -s 00:28:22.084 12:55:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.084 12:55:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:22.084 12:55:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.084 12:55:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.084 12:55:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.084 12:55:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.084 12:55:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.084 12:55:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.084 12:55:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.084 12:55:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.084 12:55:55 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:22.085 12:55:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:22.085 12:55:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.085 12:55:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.085 12:55:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:22.085 12:55:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:22.085 12:55:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.085 12:55:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.085 12:55:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.085 12:55:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.085 12:55:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.085 12:55:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.085 12:55:55 -- paths/export.sh@5 -- # export PATH 00:28:22.085 12:55:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.085 12:55:55 -- nvmf/common.sh@46 -- # : 0 00:28:22.085 12:55:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:22.085 12:55:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:22.085 12:55:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:22.085 12:55:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.085 12:55:55 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.085 12:55:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:22.085 12:55:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:22.085 12:55:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:22.085 12:55:55 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:28:22.085 12:55:55 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:22.085 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:28:22.085 12:55:55 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:28:22.085 00:28:22.085 real 0m0.215s 00:28:22.085 user 0m0.139s 00:28:22.085 sys 0m0.088s 00:28:22.085 12:55:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:22.085 12:55:55 -- common/autotest_common.sh@10 -- # set +x 00:28:22.085 ************************************ 00:28:22.085 END TEST nvmf_discovery_remove_ifc 00:28:22.085 ************************************ 00:28:22.347 12:55:55 -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:28:22.347 12:55:55 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:28:22.347 12:55:55 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:28:22.347 12:55:55 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:28:22.347 12:55:55 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:28:22.347 12:55:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:22.347 12:55:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:22.347 12:55:55 -- common/autotest_common.sh@10 -- # set +x 00:28:22.347 ************************************ 00:28:22.347 START TEST nvmf_bdevperf 00:28:22.347 ************************************ 00:28:22.347 12:55:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:28:22.347 * Looking for test storage... 00:28:22.347 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:22.347 12:55:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:22.347 12:55:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:22.347 12:55:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:22.347 12:55:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:22.347 12:55:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:22.347 12:55:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:22.347 12:55:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:22.347 12:55:55 -- scripts/common.sh@335 -- # IFS=.-: 00:28:22.347 12:55:55 -- scripts/common.sh@335 -- # read -ra ver1 00:28:22.347 12:55:55 -- scripts/common.sh@336 -- # IFS=.-: 00:28:22.347 12:55:55 -- scripts/common.sh@336 -- # read -ra ver2 00:28:22.347 12:55:55 -- scripts/common.sh@337 -- # local 'op=<' 00:28:22.347 12:55:55 -- scripts/common.sh@339 -- # ver1_l=2 00:28:22.347 12:55:55 -- scripts/common.sh@340 -- # ver2_l=1 00:28:22.347 12:55:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:22.347 12:55:55 -- scripts/common.sh@343 -- # case "$op" in 00:28:22.348 12:55:55 -- scripts/common.sh@344 -- # : 1 00:28:22.348 12:55:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:22.348 12:55:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:22.348 12:55:55 -- scripts/common.sh@364 -- # decimal 1 00:28:22.348 12:55:55 -- scripts/common.sh@352 -- # local d=1 00:28:22.348 12:55:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:22.348 12:55:55 -- scripts/common.sh@354 -- # echo 1 00:28:22.348 12:55:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:22.348 12:55:55 -- scripts/common.sh@365 -- # decimal 2 00:28:22.348 12:55:55 -- scripts/common.sh@352 -- # local d=2 00:28:22.348 12:55:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:22.348 12:55:55 -- scripts/common.sh@354 -- # echo 2 00:28:22.348 12:55:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:22.348 12:55:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:22.348 12:55:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:22.348 12:55:55 -- scripts/common.sh@367 -- # return 0 00:28:22.348 12:55:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:22.348 12:55:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:22.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.348 --rc genhtml_branch_coverage=1 00:28:22.348 --rc genhtml_function_coverage=1 00:28:22.348 --rc genhtml_legend=1 00:28:22.348 --rc geninfo_all_blocks=1 00:28:22.348 --rc geninfo_unexecuted_blocks=1 00:28:22.348 00:28:22.348 ' 00:28:22.348 12:55:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:22.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.348 --rc genhtml_branch_coverage=1 00:28:22.348 --rc genhtml_function_coverage=1 00:28:22.348 --rc genhtml_legend=1 00:28:22.348 --rc geninfo_all_blocks=1 00:28:22.348 --rc geninfo_unexecuted_blocks=1 00:28:22.348 00:28:22.348 ' 00:28:22.348 12:55:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:22.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.348 --rc genhtml_branch_coverage=1 00:28:22.348 --rc genhtml_function_coverage=1 00:28:22.348 --rc genhtml_legend=1 00:28:22.348 --rc geninfo_all_blocks=1 00:28:22.348 --rc geninfo_unexecuted_blocks=1 00:28:22.348 00:28:22.348 ' 00:28:22.348 12:55:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:22.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.348 --rc genhtml_branch_coverage=1 00:28:22.348 --rc genhtml_function_coverage=1 00:28:22.348 --rc genhtml_legend=1 00:28:22.348 --rc geninfo_all_blocks=1 00:28:22.348 --rc geninfo_unexecuted_blocks=1 00:28:22.348 00:28:22.348 ' 00:28:22.348 12:55:55 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:22.348 12:55:55 -- nvmf/common.sh@7 -- # uname -s 00:28:22.348 12:55:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.348 12:55:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:22.348 12:55:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.348 12:55:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.348 12:55:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.348 12:55:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.348 12:55:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.348 12:55:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.348 12:55:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.348 12:55:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.348 12:55:55 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:22.348 12:55:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:22.348 12:55:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.348 12:55:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.348 12:55:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:22.348 12:55:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:22.348 12:55:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.348 12:55:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.348 12:55:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.348 12:55:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.348 12:55:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.348 12:55:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.348 12:55:55 -- paths/export.sh@5 -- # export PATH 00:28:22.348 12:55:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.348 12:55:55 -- nvmf/common.sh@46 -- # : 0 00:28:22.348 12:55:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:22.348 12:55:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:22.348 12:55:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:22.348 12:55:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.348 12:55:55 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.348 12:55:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:22.348 12:55:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:22.348 12:55:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:22.348 12:55:55 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:22.348 12:55:55 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:22.348 12:55:55 -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:22.348 12:55:55 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:22.348 12:55:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.348 12:55:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:22.348 12:55:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:22.348 12:55:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:22.348 12:55:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.348 12:55:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:22.348 12:55:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.348 12:55:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:22.348 12:55:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:22.348 12:55:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:22.348 12:55:55 -- common/autotest_common.sh@10 -- # set +x 00:28:30.491 12:56:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:30.491 12:56:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:30.491 12:56:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:30.491 12:56:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:30.491 12:56:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:30.491 12:56:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:30.491 12:56:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:30.491 12:56:02 -- nvmf/common.sh@294 -- # net_devs=() 00:28:30.491 12:56:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:30.491 12:56:02 -- nvmf/common.sh@295 -- # e810=() 00:28:30.491 12:56:02 -- nvmf/common.sh@295 -- # local -ga e810 00:28:30.491 12:56:02 -- nvmf/common.sh@296 -- # x722=() 00:28:30.491 12:56:02 -- nvmf/common.sh@296 -- # local -ga x722 00:28:30.491 12:56:02 -- nvmf/common.sh@297 -- # mlx=() 00:28:30.491 12:56:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:30.491 12:56:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:30.491 12:56:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:30.491 12:56:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:30.491 12:56:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:30.491 12:56:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:30.491 12:56:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:30.491 12:56:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:30.491 12:56:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:30.491 12:56:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:30.491 12:56:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:30.491 12:56:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:30.491 12:56:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:30.491 12:56:02 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:30.491 
12:56:02 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:30.491 12:56:02 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:30.491 12:56:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:30.491 12:56:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:30.491 12:56:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:28:30.491 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:28:30.491 12:56:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:30.491 12:56:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:30.491 12:56:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:28:30.491 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:28:30.491 12:56:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:30.491 12:56:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:30.491 12:56:02 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:30.491 12:56:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.491 12:56:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:30.491 12:56:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.491 12:56:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:28:30.491 Found net devices under 0000:98:00.0: mlx_0_0 00:28:30.491 12:56:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.491 12:56:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:30.491 12:56:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.491 12:56:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:30.491 12:56:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.491 12:56:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:28:30.491 Found net devices under 0000:98:00.1: mlx_0_1 00:28:30.491 12:56:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.491 12:56:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:30.491 12:56:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:30.491 12:56:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:30.491 12:56:02 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:30.491 12:56:02 -- nvmf/common.sh@57 -- # uname 00:28:30.491 12:56:02 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:30.491 12:56:02 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:30.491 
12:56:02 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:30.491 12:56:02 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:30.491 12:56:02 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:30.491 12:56:02 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:30.491 12:56:02 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:30.491 12:56:02 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:30.491 12:56:02 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:30.491 12:56:02 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:30.491 12:56:02 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:30.491 12:56:02 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:30.491 12:56:02 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:30.491 12:56:02 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:30.491 12:56:02 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:30.491 12:56:02 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:30.491 12:56:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:30.491 12:56:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:30.491 12:56:02 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:30.491 12:56:02 -- nvmf/common.sh@104 -- # continue 2 00:28:30.491 12:56:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:30.491 12:56:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:30.491 12:56:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:30.491 12:56:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:30.491 12:56:02 -- nvmf/common.sh@104 -- # continue 2 00:28:30.491 12:56:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:30.491 12:56:02 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:30.491 12:56:02 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:30.491 12:56:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:30.491 12:56:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:30.491 12:56:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:30.491 12:56:02 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:30.491 12:56:02 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:30.491 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:30.491 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:28:30.491 altname enp152s0f0np0 00:28:30.491 altname ens817f0np0 00:28:30.491 inet 192.168.100.8/24 scope global mlx_0_0 00:28:30.491 valid_lft forever preferred_lft forever 00:28:30.491 12:56:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:30.491 12:56:02 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:30.491 12:56:02 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:30.491 12:56:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:30.491 12:56:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:30.491 12:56:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:30.491 12:56:02 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:30.491 12:56:02 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:30.491 5: mlx_0_1: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:28:30.491 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:28:30.491 altname enp152s0f1np1 00:28:30.491 altname ens817f1np1 00:28:30.491 inet 192.168.100.9/24 scope global mlx_0_1 00:28:30.491 valid_lft forever preferred_lft forever 00:28:30.491 12:56:02 -- nvmf/common.sh@410 -- # return 0 00:28:30.491 12:56:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:30.491 12:56:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:30.491 12:56:02 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:30.491 12:56:02 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:30.491 12:56:02 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:30.491 12:56:02 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:30.491 12:56:02 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:30.491 12:56:02 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:30.491 12:56:02 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:30.491 12:56:02 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:30.491 12:56:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:30.491 12:56:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:30.491 12:56:02 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:30.492 12:56:02 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:30.492 12:56:02 -- nvmf/common.sh@104 -- # continue 2 00:28:30.492 12:56:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:30.492 12:56:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:30.492 12:56:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:30.492 12:56:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:30.492 12:56:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:30.492 12:56:02 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:30.492 12:56:02 -- nvmf/common.sh@104 -- # continue 2 00:28:30.492 12:56:02 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:30.492 12:56:02 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:30.492 12:56:02 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:30.492 12:56:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:30.492 12:56:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:30.492 12:56:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:30.492 12:56:02 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:30.492 12:56:02 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:30.492 12:56:02 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:30.492 12:56:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:30.492 12:56:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:30.492 12:56:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:30.492 12:56:02 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:30.492 192.168.100.9' 00:28:30.492 12:56:02 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:30.492 192.168.100.9' 00:28:30.492 12:56:02 -- nvmf/common.sh@445 -- # head -n 1 00:28:30.492 12:56:02 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:30.492 12:56:02 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:30.492 192.168.100.9' 00:28:30.492 12:56:02 -- nvmf/common.sh@446 -- # tail -n +2 00:28:30.492 12:56:02 -- nvmf/common.sh@446 -- # head -n 1 00:28:30.492 12:56:02 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:30.492 12:56:02 -- nvmf/common.sh@447 -- # '[' 
-z 192.168.100.8 ']' 00:28:30.492 12:56:02 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:30.492 12:56:02 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:30.492 12:56:02 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:30.492 12:56:02 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:30.492 12:56:02 -- host/bdevperf.sh@25 -- # tgt_init 00:28:30.492 12:56:02 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:30.492 12:56:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:30.492 12:56:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:30.492 12:56:02 -- common/autotest_common.sh@10 -- # set +x 00:28:30.492 12:56:02 -- nvmf/common.sh@469 -- # nvmfpid=680608 00:28:30.492 12:56:02 -- nvmf/common.sh@470 -- # waitforlisten 680608 00:28:30.492 12:56:02 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:30.492 12:56:02 -- common/autotest_common.sh@829 -- # '[' -z 680608 ']' 00:28:30.492 12:56:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.492 12:56:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:30.492 12:56:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.492 12:56:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:30.492 12:56:02 -- common/autotest_common.sh@10 -- # set +x 00:28:30.492 [2024-11-20 12:56:02.734857] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:30.492 [2024-11-20 12:56:02.734935] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.492 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.492 [2024-11-20 12:56:02.820564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:30.492 [2024-11-20 12:56:02.913156] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:30.492 [2024-11-20 12:56:02.913322] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.492 [2024-11-20 12:56:02.913333] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:30.492 [2024-11-20 12:56:02.913342] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
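The target start traced above follows the usual start-then-wait pattern: nvmfappstart launches nvmf_tgt with the requested core mask (-m 0xE) and blocks until the RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that pattern, assuming rpc.py with rpc_get_methods as the readiness probe (the polling loop is an illustration, not the actual waitforlisten implementation):

    # Sketch only: start the target and poll the RPC socket until it is ready.
    # Binary path, core mask and socket path are the ones shown in the trace;
    # the loop body is an assumption.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done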
00:28:30.492 [2024-11-20 12:56:02.913489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:30.492 [2024-11-20 12:56:02.913654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.492 [2024-11-20 12:56:02.913655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:30.492 12:56:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:30.492 12:56:03 -- common/autotest_common.sh@862 -- # return 0 00:28:30.492 12:56:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:30.492 12:56:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:30.492 12:56:03 -- common/autotest_common.sh@10 -- # set +x 00:28:30.492 12:56:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.492 12:56:03 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:30.492 12:56:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.492 12:56:03 -- common/autotest_common.sh@10 -- # set +x 00:28:30.492 [2024-11-20 12:56:03.593636] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd98fa0/0xd9d490) succeed. 00:28:30.753 [2024-11-20 12:56:03.607698] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd9a4f0/0xddeb30) succeed. 00:28:30.753 12:56:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.753 12:56:03 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:30.753 12:56:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.753 12:56:03 -- common/autotest_common.sh@10 -- # set +x 00:28:30.753 Malloc0 00:28:30.753 12:56:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.753 12:56:03 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:30.753 12:56:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.753 12:56:03 -- common/autotest_common.sh@10 -- # set +x 00:28:30.753 12:56:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.753 12:56:03 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:30.753 12:56:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.753 12:56:03 -- common/autotest_common.sh@10 -- # set +x 00:28:30.753 12:56:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.753 12:56:03 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:30.753 12:56:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.753 12:56:03 -- common/autotest_common.sh@10 -- # set +x 00:28:30.753 [2024-11-20 12:56:03.755703] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:30.753 12:56:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.753 12:56:03 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:30.753 12:56:03 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:30.753 12:56:03 -- nvmf/common.sh@520 -- # config=() 00:28:30.753 12:56:03 -- nvmf/common.sh@520 -- # local subsystem config 00:28:30.753 12:56:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:30.753 12:56:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:30.753 { 00:28:30.753 "params": { 00:28:30.753 "name": "Nvme$subsystem", 00:28:30.753 "trtype": 
"$TEST_TRANSPORT", 00:28:30.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.753 "adrfam": "ipv4", 00:28:30.753 "trsvcid": "$NVMF_PORT", 00:28:30.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.753 "hdgst": ${hdgst:-false}, 00:28:30.753 "ddgst": ${ddgst:-false} 00:28:30.753 }, 00:28:30.753 "method": "bdev_nvme_attach_controller" 00:28:30.753 } 00:28:30.753 EOF 00:28:30.753 )") 00:28:30.753 12:56:03 -- nvmf/common.sh@542 -- # cat 00:28:30.753 12:56:03 -- nvmf/common.sh@544 -- # jq . 00:28:30.753 12:56:03 -- nvmf/common.sh@545 -- # IFS=, 00:28:30.753 12:56:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:30.753 "params": { 00:28:30.753 "name": "Nvme1", 00:28:30.753 "trtype": "rdma", 00:28:30.753 "traddr": "192.168.100.8", 00:28:30.753 "adrfam": "ipv4", 00:28:30.753 "trsvcid": "4420", 00:28:30.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:30.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:30.753 "hdgst": false, 00:28:30.753 "ddgst": false 00:28:30.753 }, 00:28:30.753 "method": "bdev_nvme_attach_controller" 00:28:30.753 }' 00:28:30.753 [2024-11-20 12:56:03.814361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:30.753 [2024-11-20 12:56:03.814409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid680939 ] 00:28:30.753 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.015 [2024-11-20 12:56:03.873292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.015 [2024-11-20 12:56:03.935825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.015 Running I/O for 1 seconds... 
00:28:32.400 00:28:32.400 Latency(us) 00:28:32.400 [2024-11-20T11:56:05.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.400 [2024-11-20T11:56:05.508Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:32.400 Verification LBA range: start 0x0 length 0x4000 00:28:32.400 Nvme1n1 : 1.00 20063.21 78.37 0.00 0.00 6344.87 1372.16 13653.33 00:28:32.400 [2024-11-20T11:56:05.508Z] =================================================================================================================== 00:28:32.400 [2024-11-20T11:56:05.508Z] Total : 20063.21 78.37 0.00 0.00 6344.87 1372.16 13653.33 00:28:32.400 12:56:05 -- host/bdevperf.sh@30 -- # bdevperfpid=681274 00:28:32.400 12:56:05 -- host/bdevperf.sh@32 -- # sleep 3 00:28:32.400 12:56:05 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:32.400 12:56:05 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:32.400 12:56:05 -- nvmf/common.sh@520 -- # config=() 00:28:32.400 12:56:05 -- nvmf/common.sh@520 -- # local subsystem config 00:28:32.400 12:56:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:32.400 12:56:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:32.400 { 00:28:32.400 "params": { 00:28:32.400 "name": "Nvme$subsystem", 00:28:32.400 "trtype": "$TEST_TRANSPORT", 00:28:32.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.400 "adrfam": "ipv4", 00:28:32.400 "trsvcid": "$NVMF_PORT", 00:28:32.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.400 "hdgst": ${hdgst:-false}, 00:28:32.400 "ddgst": ${ddgst:-false} 00:28:32.400 }, 00:28:32.400 "method": "bdev_nvme_attach_controller" 00:28:32.400 } 00:28:32.400 EOF 00:28:32.400 )") 00:28:32.400 12:56:05 -- nvmf/common.sh@542 -- # cat 00:28:32.400 12:56:05 -- nvmf/common.sh@544 -- # jq . 00:28:32.400 12:56:05 -- nvmf/common.sh@545 -- # IFS=, 00:28:32.400 12:56:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:32.400 "params": { 00:28:32.400 "name": "Nvme1", 00:28:32.400 "trtype": "rdma", 00:28:32.400 "traddr": "192.168.100.8", 00:28:32.400 "adrfam": "ipv4", 00:28:32.400 "trsvcid": "4420", 00:28:32.400 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:32.400 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:32.400 "hdgst": false, 00:28:32.400 "ddgst": false 00:28:32.400 }, 00:28:32.400 "method": "bdev_nvme_attach_controller" 00:28:32.400 }' 00:28:32.400 [2024-11-20 12:56:05.315632] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:32.400 [2024-11-20 12:56:05.315686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid681274 ] 00:28:32.400 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.400 [2024-11-20 12:56:05.376352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.400 [2024-11-20 12:56:05.436951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.661 Running I/O for 15 seconds... 
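The second bdevperf run (-t 15 -f) is immediately followed by a hard kill of the target process (pid 680608, seen below), so this part of the test exercises target failure while verify I/O is in flight. A minimal sketch of that pattern, with the PIDs from this run hard-coded for illustration (the real script captures them in variables, and any later restart of the target is outside this excerpt):

    # Sketch only: run bdevperf in the background against the live target, give it a
    # few seconds of verify I/O, then kill the nvmf target underneath it.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!        # 681274 in this run
    sleep 3
    kill -9 680608        # nvmf_tgt pid from this run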
00:28:35.205 12:56:08 -- host/bdevperf.sh@33 -- # kill -9 680608 00:28:35.205 12:56:08 -- host/bdevperf.sh@35 -- # sleep 3 00:28:36.590 [2024-11-20 12:56:09.311264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.590 [2024-11-20 12:56:09.311318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.590 [2024-11-20 12:56:09.311339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x182b00 00:28:36.591 [2024-11-20 12:56:09.311348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387dd80 len:0x1000 key:0x182b00 00:28:36.591 [2024-11-20 12:56:09.311366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.591 [2024-11-20 12:56:09.311383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387bc80 len:0x1000 key:0x182b00 00:28:36.591 [2024-11-20 12:56:09.311400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183000 00:28:36.591 [2024-11-20 12:56:09.311417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183000 00:28:36.591 [2024-11-20 12:56:09.311433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183000 00:28:36.591 [2024-11-20 12:56:09.311450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.591 [2024-11-20 12:56:09.311467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 
12:56:09.311477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013876a00 len:0x1000 key:0x182b00 00:28:36.591 [2024-11-20 12:56:09.311484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x183000 00:28:36.591 [2024-11-20 12:56:09.311501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183000 00:28:36.591 [2024-11-20 12:56:09.311518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f7880 len:0x1000 key:0x182b00 00:28:36.591 [2024-11-20 12:56:09.311537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183000 00:28:36.591 [2024-11-20 12:56:09.311555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183000 00:28:36.591 [2024-11-20 12:56:09.311572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:66872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f4700 len:0x1000 key:0x182b00 00:28:36.591 [2024-11-20 12:56:09.311589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f3680 len:0x1000 key:0x182b00 00:28:36.591 [2024-11-20 12:56:09.311606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183000 00:28:36.591 [2024-11-20 12:56:09.311623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311633] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:125 nsid:1 lba:66264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183000 00:28:36.591 [2024-11-20 12:56:09.311640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f0500 len:0x1000 key:0x182b00 00:28:36.591 [2024-11-20 12:56:09.311657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183000 00:28:36.591 [2024-11-20 12:56:09.311674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ee400 len:0x1000 key:0x182b00 00:28:36.591 [2024-11-20 12:56:09.311691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.591 [2024-11-20 12:56:09.311708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ec300 len:0x1000 key:0x182b00 00:28:36.591 [2024-11-20 12:56:09.311728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138eb280 len:0x1000 key:0x182b00 00:28:36.591 [2024-11-20 12:56:09.311745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.591 [2024-11-20 12:56:09.311762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e9180 len:0x1000 key:0x182b00 00:28:36.591 [2024-11-20 12:56:09.311778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e8100 len:0x1000 key:0x182b00 
00:28:36.591 [2024-11-20 12:56:09.311795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.591 [2024-11-20 12:56:09.311811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183000 00:28:36.591 [2024-11-20 12:56:09.311828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.591 [2024-11-20 12:56:09.311837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e4f80 len:0x1000 key:0x182b00 00:28:36.592 [2024-11-20 12:56:09.311844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.311853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183000 00:28:36.592 [2024-11-20 12:56:09.311861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.311870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183000 00:28:36.592 [2024-11-20 12:56:09.311877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.311888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.592 [2024-11-20 12:56:09.311895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.311904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.592 [2024-11-20 12:56:09.311912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.311923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dfd00 len:0x1000 key:0x182b00 00:28:36.592 [2024-11-20 12:56:09.311930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.311940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.592 [2024-11-20 12:56:09.311947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 
[2024-11-20 12:56:09.311956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183000 00:28:36.592 [2024-11-20 12:56:09.311964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.311973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:67000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dcb80 len:0x1000 key:0x182b00 00:28:36.592 [2024-11-20 12:56:09.311984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.311994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dbb00 len:0x1000 key:0x182b00 00:28:36.592 [2024-11-20 12:56:09.312001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138daa80 len:0x1000 key:0x182b00 00:28:36.592 [2024-11-20 12:56:09.312018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183000 00:28:36.592 [2024-11-20 12:56:09.312035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183000 00:28:36.592 [2024-11-20 12:56:09.312051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183000 00:28:36.592 [2024-11-20 12:56:09.312068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x183000 00:28:36.592 [2024-11-20 12:56:09.312085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183000 00:28:36.592 [2024-11-20 12:56:09.312101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183000 00:28:36.592 [2024-11-20 12:56:09.312119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d3700 len:0x1000 key:0x182b00 00:28:36.592 [2024-11-20 12:56:09.312136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:67032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d2680 len:0x1000 key:0x182b00 00:28:36.592 [2024-11-20 12:56:09.312153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.592 [2024-11-20 12:56:09.312169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183000 00:28:36.592 [2024-11-20 12:56:09.312186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cf500 len:0x1000 key:0x182b00 00:28:36.592 [2024-11-20 12:56:09.312203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ce480 len:0x1000 key:0x182b00 00:28:36.592 [2024-11-20 12:56:09.312219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.592 [2024-11-20 12:56:09.312237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183000 00:28:36.592 [2024-11-20 12:56:09.312254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cb300 
len:0x1000 key:0x182b00 00:28:36.592 [2024-11-20 12:56:09.312271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.592 [2024-11-20 12:56:09.312287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c9200 len:0x1000 key:0x182b00 00:28:36.592 [2024-11-20 12:56:09.312304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.592 [2024-11-20 12:56:09.312322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.592 [2024-11-20 12:56:09.312338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:67112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c6080 len:0x1000 key:0x182b00 00:28:36.592 [2024-11-20 12:56:09.312355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c5000 len:0x1000 key:0x182b00 00:28:36.592 [2024-11-20 12:56:09.312372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x182b00 00:28:36.592 [2024-11-20 12:56:09.312388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.592 [2024-11-20 12:56:09.312398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183000 00:28:36.593 [2024-11-20 12:56:09.312405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183000 00:28:36.593 [2024-11-20 12:56:09.312422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183000 00:28:36.593 [2024-11-20 12:56:09.312439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bfd80 len:0x1000 key:0x182b00 00:28:36.593 [2024-11-20 12:56:09.312455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.593 [2024-11-20 12:56:09.312472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:67152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bdc80 len:0x1000 key:0x182b00 00:28:36.593 [2024-11-20 12:56:09.312488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183000 00:28:36.593 [2024-11-20 12:56:09.312509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183000 00:28:36.593 [2024-11-20 12:56:09.312526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.593 [2024-11-20 12:56:09.312542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x182b00 00:28:36.593 [2024-11-20 12:56:09.312559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183000 00:28:36.593 [2024-11-20 12:56:09.312576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.593 [2024-11-20 12:56:09.312592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183000 00:28:36.593 [2024-11-20 12:56:09.312608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.593 [2024-11-20 12:56:09.312625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183000 00:28:36.593 [2024-11-20 12:56:09.312642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:67192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0x182b00 00:28:36.593 [2024-11-20 12:56:09.312659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183000 00:28:36.593 [2024-11-20 12:56:09.312676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.593 [2024-11-20 12:56:09.312692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183000 00:28:36.593 [2024-11-20 12:56:09.312710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138af580 len:0x1000 key:0x182b00 00:28:36.593 [2024-11-20 12:56:09.312727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.593 [2024-11-20 
12:56:09.312743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:66640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183000 00:28:36.593 [2024-11-20 12:56:09.312759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ac400 len:0x1000 key:0x182b00 00:28:36.593 [2024-11-20 12:56:09.312776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.593 [2024-11-20 12:56:09.312792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:67240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa300 len:0x1000 key:0x182b00 00:28:36.593 [2024-11-20 12:56:09.312809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.593 [2024-11-20 12:56:09.312826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:67256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a8200 len:0x1000 key:0x182b00 00:28:36.593 [2024-11-20 12:56:09.312842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:67264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x182b00 00:28:36.593 [2024-11-20 12:56:09.312859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a6100 len:0x1000 key:0x182b00 00:28:36.593 [2024-11-20 12:56:09.312876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.593 [2024-11-20 12:56:09.312892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 
00:28:36.593 [2024-11-20 12:56:09.312903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:66680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183000 00:28:36.593 [2024-11-20 12:56:09.312911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a2f80 len:0x1000 key:0x182b00 00:28:36.593 [2024-11-20 12:56:09.312927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a1f00 len:0x1000 key:0x182b00 00:28:36.593 [2024-11-20 12:56:09.312944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.593 [2024-11-20 12:56:09.312953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183000 00:28:36.593 [2024-11-20 12:56:09.312961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.312971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389fe00 len:0x1000 key:0x182b00 00:28:36.594 [2024-11-20 12:56:09.312978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.312993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ed80 len:0x1000 key:0x182b00 00:28:36.594 [2024-11-20 12:56:09.313001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389dd00 len:0x1000 key:0x182b00 00:28:36.594 [2024-11-20 12:56:09.313017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389cc80 len:0x1000 key:0x182b00 00:28:36.594 [2024-11-20 12:56:09.313034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.594 [2024-11-20 12:56:09.313051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:68 nsid:1 lba:66704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183000 00:28:36.594 [2024-11-20 12:56:09.313068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.594 [2024-11-20 12:56:09.313085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013898a80 len:0x1000 key:0x182b00 00:28:36.594 [2024-11-20 12:56:09.313103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013897a00 len:0x1000 key:0x182b00 00:28:36.594 [2024-11-20 12:56:09.313120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.594 [2024-11-20 12:56:09.313136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183000 00:28:36.594 [2024-11-20 12:56:09.313153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:67376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013894880 len:0x1000 key:0x182b00 00:28:36.594 [2024-11-20 12:56:09.313169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.594 [2024-11-20 12:56:09.313186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.594 [2024-11-20 12:56:09.313202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.594 [2024-11-20 12:56:09.313219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013890680 len:0x1000 key:0x182b00 00:28:36.594 [2024-11-20 12:56:09.313235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:67416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.594 [2024-11-20 12:56:09.313252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388e580 len:0x1000 key:0x182b00 00:28:36.594 [2024-11-20 12:56:09.313268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183000 00:28:36.594 [2024-11-20 12:56:09.313285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388c480 len:0x1000 key:0x182b00 00:28:36.594 [2024-11-20 12:56:09.313303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:67440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388b400 len:0x1000 key:0x182b00 00:28:36.594 [2024-11-20 12:56:09.313320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:67448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.594 [2024-11-20 12:56:09.313336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.594 [2024-11-20 12:56:09.313352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:67464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.594 [2024-11-20 12:56:09.313369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 
nsid:1 lba:67472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013887200 len:0x1000 key:0x182b00 00:28:36.594 [2024-11-20 12:56:09.313385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013886180 len:0x1000 key:0x182b00 00:28:36.594 [2024-11-20 12:56:09.313402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:67488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013885100 len:0x1000 key:0x182b00 00:28:36.594 [2024-11-20 12:56:09.313419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183000 00:28:36.594 [2024-11-20 12:56:09.313436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013883000 len:0x1000 key:0x182b00 00:28:36.594 [2024-11-20 12:56:09.313453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.594 [2024-11-20 12:56:09.313463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013881f80 len:0x1000 key:0x182b00 00:28:36.594 [2024-11-20 12:56:09.313470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:309a3000 sqhd:5310 p:0 m:0 dnr:0 00:28:36.595 [2024-11-20 12:56:09.324348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:36.595 [2024-11-20 12:56:09.324369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:36.595 [2024-11-20 12:56:09.324382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66800 len:8 PRP1 0x0 PRP2 0x0 00:28:36.595 [2024-11-20 12:56:09.324391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.595 [2024-11-20 12:56:09.324426] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 
00:28:36.595 [2024-11-20 12:56:09.324459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.595 [2024-11-20 12:56:09.324468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.595 [2024-11-20 12:56:09.324477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.595 [2024-11-20 12:56:09.324484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.595 [2024-11-20 12:56:09.324492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.595 [2024-11-20 12:56:09.324500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.595 [2024-11-20 12:56:09.324507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.595 [2024-11-20 12:56:09.324515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.595 [2024-11-20 12:56:09.344506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:36.595 [2024-11-20 12:56:09.344546] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:36.595 [2024-11-20 12:56:09.344568] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:36.595 [2024-11-20 12:56:09.347598] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:36.595 [2024-11-20 12:56:09.351045] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:36.595 [2024-11-20 12:56:09.351063] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:36.595 [2024-11-20 12:56:09.351070] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:37.535 [2024-11-20 12:56:10.355300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:37.535 [2024-11-20 12:56:10.355357] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:37.535 [2024-11-20 12:56:10.355781] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:37.535 [2024-11-20 12:56:10.355806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:37.535 [2024-11-20 12:56:10.355830] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:37.535 [2024-11-20 12:56:10.357218] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:37.535 [2024-11-20 12:56:10.358084] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:37.535 [2024-11-20 12:56:10.369487] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:37.535 [2024-11-20 12:56:10.373203] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:37.535 [2024-11-20 12:56:10.373221] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:37.535 [2024-11-20 12:56:10.373233] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:38.477 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 680608 Killed "${NVMF_APP[@]}" "$@" 00:28:38.477 12:56:11 -- host/bdevperf.sh@36 -- # tgt_init 00:28:38.477 12:56:11 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:38.477 12:56:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:38.477 12:56:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:38.477 12:56:11 -- common/autotest_common.sh@10 -- # set +x 00:28:38.477 12:56:11 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:38.477 12:56:11 -- nvmf/common.sh@469 -- # nvmfpid=682309 00:28:38.477 12:56:11 -- nvmf/common.sh@470 -- # waitforlisten 682309 00:28:38.477 12:56:11 -- common/autotest_common.sh@829 -- # '[' -z 682309 ']' 00:28:38.477 12:56:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.477 12:56:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:38.477 12:56:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.477 12:56:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:38.477 12:56:11 -- common/autotest_common.sh@10 -- # set +x 00:28:38.477 [2024-11-20 12:56:11.308164] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:38.477 [2024-11-20 12:56:11.308201] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.477 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.477 [2024-11-20 12:56:11.377750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:38.477 [2024-11-20 12:56:11.377772] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.477 [2024-11-20 12:56:11.377934] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.477 [2024-11-20 12:56:11.377943] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.477 [2024-11-20 12:56:11.377951] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:38.477 [2024-11-20 12:56:11.378725] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:38.477 [2024-11-20 12:56:11.379202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:38.477 [2024-11-20 12:56:11.380333] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.477 [2024-11-20 12:56:11.391021] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.477 [2024-11-20 12:56:11.394149] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:38.477 [2024-11-20 12:56:11.394169] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:38.477 [2024-11-20 12:56:11.394176] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:38.477 [2024-11-20 12:56:11.430919] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:38.477 [2024-11-20 12:56:11.431018] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.477 [2024-11-20 12:56:11.431024] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:38.477 [2024-11-20 12:56:11.431030] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:38.477 [2024-11-20 12:56:11.431152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:38.477 [2024-11-20 12:56:11.431308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.477 [2024-11-20 12:56:11.431310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:39.049 12:56:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:39.049 12:56:12 -- common/autotest_common.sh@862 -- # return 0 00:28:39.049 12:56:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:39.049 12:56:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:39.049 12:56:12 -- common/autotest_common.sh@10 -- # set +x 00:28:39.309 12:56:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.309 12:56:12 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:39.309 12:56:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.309 12:56:12 -- common/autotest_common.sh@10 -- # set +x 00:28:39.310 [2024-11-20 12:56:12.198897] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c31fa0/0x1c36490) succeed. 00:28:39.310 [2024-11-20 12:56:12.210027] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c334f0/0x1c77b30) succeed. 
00:28:39.310 12:56:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.310 12:56:12 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:39.310 12:56:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.310 12:56:12 -- common/autotest_common.sh@10 -- # set +x 00:28:39.310 Malloc0 00:28:39.310 12:56:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.310 12:56:12 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:39.310 12:56:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.310 12:56:12 -- common/autotest_common.sh@10 -- # set +x 00:28:39.310 12:56:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.310 12:56:12 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:39.310 12:56:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.310 12:56:12 -- common/autotest_common.sh@10 -- # set +x 00:28:39.310 12:56:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.310 12:56:12 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:39.310 12:56:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.310 12:56:12 -- common/autotest_common.sh@10 -- # set +x 00:28:39.310 [2024-11-20 12:56:12.339422] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:39.310 12:56:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.310 12:56:12 -- host/bdevperf.sh@38 -- # wait 681274 00:28:39.310 [2024-11-20 12:56:12.398624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:39.310 [2024-11-20 12:56:12.398650] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.310 [2024-11-20 12:56:12.398812] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.310 [2024-11-20 12:56:12.398821] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.310 [2024-11-20 12:56:12.398829] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:39.310 [2024-11-20 12:56:12.400540] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:39.310 [2024-11-20 12:56:12.401064] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.310 [2024-11-20 12:56:12.413172] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.570 [2024-11-20 12:56:12.461249] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
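[annotation] The rpc_cmd calls traced above rebuild the target configuration after the restart. Issued directly with scripts/rpc.py against the new nvmf_tgt instance, the same sequence would look roughly like the sketch below (same transport options, NQN, serial, bdev and listener address as in the log; the rpc.py path is an assumption):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # RDMA transport with the buffer sizing used by the test
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

    # 64 MiB malloc bdev with 512-byte blocks, added below as a namespace
    $RPC bdev_malloc_create 64 512 -b Malloc0

    # Subsystem, namespace and RDMA listener on the RoCE interface
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420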
00:28:47.715 00:28:47.715 Latency(us) 00:28:47.715 [2024-11-20T11:56:20.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.715 [2024-11-20T11:56:20.823Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:47.715 Verification LBA range: start 0x0 length 0x4000 00:28:47.715 Nvme1n1 : 15.01 20030.75 78.25 12164.77 0.00 3960.16 440.32 1069547.52 00:28:47.715 [2024-11-20T11:56:20.823Z] =================================================================================================================== 00:28:47.715 [2024-11-20T11:56:20.823Z] Total : 20030.75 78.25 12164.77 0.00 3960.16 440.32 1069547.52 00:28:47.715 12:56:20 -- host/bdevperf.sh@39 -- # sync 00:28:47.715 12:56:20 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:47.715 12:56:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.715 12:56:20 -- common/autotest_common.sh@10 -- # set +x 00:28:47.976 12:56:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.976 12:56:20 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:47.976 12:56:20 -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:47.976 12:56:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:47.976 12:56:20 -- nvmf/common.sh@116 -- # sync 00:28:47.976 12:56:20 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:28:47.976 12:56:20 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:28:47.976 12:56:20 -- nvmf/common.sh@119 -- # set +e 00:28:47.976 12:56:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:47.976 12:56:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:28:47.976 rmmod nvme_rdma 00:28:47.976 rmmod nvme_fabrics 00:28:47.976 12:56:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:47.976 12:56:20 -- nvmf/common.sh@123 -- # set -e 00:28:47.976 12:56:20 -- nvmf/common.sh@124 -- # return 0 00:28:47.976 12:56:20 -- nvmf/common.sh@477 -- # '[' -n 682309 ']' 00:28:47.976 12:56:20 -- nvmf/common.sh@478 -- # killprocess 682309 00:28:47.976 12:56:20 -- common/autotest_common.sh@936 -- # '[' -z 682309 ']' 00:28:47.976 12:56:20 -- common/autotest_common.sh@940 -- # kill -0 682309 00:28:47.976 12:56:20 -- common/autotest_common.sh@941 -- # uname 00:28:47.976 12:56:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:47.976 12:56:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 682309 00:28:47.976 12:56:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:47.976 12:56:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:47.976 12:56:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 682309' 00:28:47.976 killing process with pid 682309 00:28:47.976 12:56:20 -- common/autotest_common.sh@955 -- # kill 682309 00:28:47.976 12:56:20 -- common/autotest_common.sh@960 -- # wait 682309 00:28:48.236 12:56:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:48.236 12:56:21 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:28:48.236 00:28:48.236 real 0m25.928s 00:28:48.236 user 1m4.108s 00:28:48.236 sys 0m6.521s 00:28:48.236 12:56:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:48.236 12:56:21 -- common/autotest_common.sh@10 -- # set +x 00:28:48.236 ************************************ 00:28:48.236 END TEST nvmf_bdevperf 00:28:48.236 ************************************ 00:28:48.236 12:56:21 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh 
--transport=rdma 00:28:48.236 12:56:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:48.236 12:56:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:48.236 12:56:21 -- common/autotest_common.sh@10 -- # set +x 00:28:48.236 ************************************ 00:28:48.236 START TEST nvmf_target_disconnect 00:28:48.236 ************************************ 00:28:48.236 12:56:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:28:48.236 * Looking for test storage... 00:28:48.236 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:48.236 12:56:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:48.236 12:56:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:48.237 12:56:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:48.498 12:56:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:48.498 12:56:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:48.498 12:56:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:48.498 12:56:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:48.498 12:56:21 -- scripts/common.sh@335 -- # IFS=.-: 00:28:48.498 12:56:21 -- scripts/common.sh@335 -- # read -ra ver1 00:28:48.498 12:56:21 -- scripts/common.sh@336 -- # IFS=.-: 00:28:48.498 12:56:21 -- scripts/common.sh@336 -- # read -ra ver2 00:28:48.498 12:56:21 -- scripts/common.sh@337 -- # local 'op=<' 00:28:48.498 12:56:21 -- scripts/common.sh@339 -- # ver1_l=2 00:28:48.498 12:56:21 -- scripts/common.sh@340 -- # ver2_l=1 00:28:48.498 12:56:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:48.498 12:56:21 -- scripts/common.sh@343 -- # case "$op" in 00:28:48.498 12:56:21 -- scripts/common.sh@344 -- # : 1 00:28:48.498 12:56:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:48.498 12:56:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:48.498 12:56:21 -- scripts/common.sh@364 -- # decimal 1 00:28:48.498 12:56:21 -- scripts/common.sh@352 -- # local d=1 00:28:48.498 12:56:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.498 12:56:21 -- scripts/common.sh@354 -- # echo 1 00:28:48.498 12:56:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:48.498 12:56:21 -- scripts/common.sh@365 -- # decimal 2 00:28:48.498 12:56:21 -- scripts/common.sh@352 -- # local d=2 00:28:48.498 12:56:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.498 12:56:21 -- scripts/common.sh@354 -- # echo 2 00:28:48.498 12:56:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:48.498 12:56:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:48.498 12:56:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:48.498 12:56:21 -- scripts/common.sh@367 -- # return 0 00:28:48.498 12:56:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.498 12:56:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:48.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.498 --rc genhtml_branch_coverage=1 00:28:48.498 --rc genhtml_function_coverage=1 00:28:48.498 --rc genhtml_legend=1 00:28:48.498 --rc geninfo_all_blocks=1 00:28:48.498 --rc geninfo_unexecuted_blocks=1 00:28:48.498 00:28:48.498 ' 00:28:48.498 12:56:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:48.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.498 --rc genhtml_branch_coverage=1 00:28:48.498 --rc genhtml_function_coverage=1 00:28:48.498 --rc genhtml_legend=1 00:28:48.498 --rc geninfo_all_blocks=1 00:28:48.498 --rc geninfo_unexecuted_blocks=1 00:28:48.498 00:28:48.498 ' 00:28:48.498 12:56:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:48.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.498 --rc genhtml_branch_coverage=1 00:28:48.498 --rc genhtml_function_coverage=1 00:28:48.498 --rc genhtml_legend=1 00:28:48.498 --rc geninfo_all_blocks=1 00:28:48.498 --rc geninfo_unexecuted_blocks=1 00:28:48.498 00:28:48.498 ' 00:28:48.498 12:56:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:48.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.498 --rc genhtml_branch_coverage=1 00:28:48.498 --rc genhtml_function_coverage=1 00:28:48.498 --rc genhtml_legend=1 00:28:48.498 --rc geninfo_all_blocks=1 00:28:48.498 --rc geninfo_unexecuted_blocks=1 00:28:48.498 00:28:48.498 ' 00:28:48.498 12:56:21 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.498 12:56:21 -- nvmf/common.sh@7 -- # uname -s 00:28:48.498 12:56:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.498 12:56:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.498 12:56:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.498 12:56:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.498 12:56:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.499 12:56:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.499 12:56:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.499 12:56:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.499 12:56:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.499 12:56:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.499 12:56:21 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:48.499 12:56:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:48.499 12:56:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.499 12:56:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.499 12:56:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.499 12:56:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:48.499 12:56:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.499 12:56:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.499 12:56:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.499 12:56:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.499 12:56:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.499 12:56:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.499 12:56:21 -- paths/export.sh@5 -- # export PATH 00:28:48.499 12:56:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.499 12:56:21 -- nvmf/common.sh@46 -- # : 0 00:28:48.499 12:56:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:48.499 12:56:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:48.499 12:56:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:48.499 12:56:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.499 12:56:21 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.499 12:56:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:48.499 12:56:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:48.499 12:56:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:48.499 12:56:21 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:28:48.499 12:56:21 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:48.499 12:56:21 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:48.499 12:56:21 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:28:48.499 12:56:21 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:48.499 12:56:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.499 12:56:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:48.499 12:56:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:48.499 12:56:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:48.499 12:56:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.499 12:56:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:48.499 12:56:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.499 12:56:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:48.499 12:56:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:48.499 12:56:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:48.499 12:56:21 -- common/autotest_common.sh@10 -- # set +x 00:28:56.643 12:56:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:56.643 12:56:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:56.643 12:56:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:56.643 12:56:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:56.643 12:56:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:56.643 12:56:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:56.643 12:56:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:56.643 12:56:28 -- nvmf/common.sh@294 -- # net_devs=() 00:28:56.643 12:56:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:56.643 12:56:28 -- nvmf/common.sh@295 -- # e810=() 00:28:56.643 12:56:28 -- nvmf/common.sh@295 -- # local -ga e810 00:28:56.643 12:56:28 -- nvmf/common.sh@296 -- # x722=() 00:28:56.643 12:56:28 -- nvmf/common.sh@296 -- # local -ga x722 00:28:56.643 12:56:28 -- nvmf/common.sh@297 -- # mlx=() 00:28:56.643 12:56:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:56.643 12:56:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.643 12:56:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.643 12:56:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.643 12:56:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.643 12:56:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.643 12:56:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.643 12:56:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.643 12:56:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.643 12:56:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.643 12:56:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.643 12:56:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.643 12:56:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 
00:28:56.643 12:56:28 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:56.643 12:56:28 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:56.643 12:56:28 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:56.643 12:56:28 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:56.643 12:56:28 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:56.643 12:56:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:56.643 12:56:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:56.643 12:56:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:28:56.643 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:28:56.643 12:56:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:56.643 12:56:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:56.643 12:56:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:56.643 12:56:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:56.643 12:56:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:56.643 12:56:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:56.643 12:56:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:56.643 12:56:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:28:56.643 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:28:56.643 12:56:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:56.643 12:56:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:56.643 12:56:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:56.643 12:56:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:56.644 12:56:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:56.644 12:56:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:56.644 12:56:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:56.644 12:56:28 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:56.644 12:56:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:56.644 12:56:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.644 12:56:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:56.644 12:56:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.644 12:56:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:28:56.644 Found net devices under 0000:98:00.0: mlx_0_0 00:28:56.644 12:56:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.644 12:56:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:56.644 12:56:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.644 12:56:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:56.644 12:56:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.644 12:56:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:28:56.644 Found net devices under 0000:98:00.1: mlx_0_1 00:28:56.644 12:56:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.644 12:56:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:56.644 12:56:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:56.644 12:56:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:56.644 12:56:28 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:56.644 12:56:28 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:56.644 12:56:28 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:56.644 12:56:28 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:56.644 12:56:28 -- nvmf/common.sh@57 -- # uname 
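[annotation] The discovery logic traced here walks a cached PCI device list for Mellanox ConnectX-4 Lx functions (vendor 0x15b3, device 0x1015) and then reads each function's net/ directory in sysfs to learn the kernel netdev names (mlx_0_0 and mlx_0_1). Outside the harness, the same information can be gathered with a short sketch like this (lspci filter and sysfs layout as commonly available; not the helper's actual implementation):

    # List ConnectX-4 Lx functions, e.g. 0000:98:00.0 and 0000:98:00.1 in this log
    for pci in $(lspci -D -d 15b3:1015 | awk '{print $1}'); do
        # Each PCI function exposes its netdev name(s) under sysfs
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net device under $pci: $(basename "$netdev")"
        done
    done

The subsequent modprobe and allocate_nic_ips steps in the trace then load the RDMA core modules and place 192.168.100.8/24 and 192.168.100.9/24 on the two ports.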
00:28:56.644 12:56:28 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:56.644 12:56:28 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:56.644 12:56:28 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:56.644 12:56:28 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:56.644 12:56:28 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:56.644 12:56:28 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:56.644 12:56:28 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:56.644 12:56:28 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:56.644 12:56:28 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:56.644 12:56:28 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:56.644 12:56:28 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:56.644 12:56:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:56.644 12:56:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:56.644 12:56:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:56.644 12:56:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:56.644 12:56:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:56.644 12:56:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:56.644 12:56:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:56.644 12:56:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:56.644 12:56:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:56.644 12:56:28 -- nvmf/common.sh@104 -- # continue 2 00:28:56.644 12:56:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:56.644 12:56:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:56.644 12:56:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:56.644 12:56:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:56.644 12:56:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:56.644 12:56:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:56.644 12:56:28 -- nvmf/common.sh@104 -- # continue 2 00:28:56.644 12:56:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:56.644 12:56:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:56.644 12:56:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:56.644 12:56:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:56.644 12:56:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:56.644 12:56:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:56.644 12:56:28 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:56.644 12:56:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:56.644 12:56:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:56.644 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:56.644 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:28:56.644 altname enp152s0f0np0 00:28:56.644 altname ens817f0np0 00:28:56.644 inet 192.168.100.8/24 scope global mlx_0_0 00:28:56.644 valid_lft forever preferred_lft forever 00:28:56.644 12:56:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:56.644 12:56:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:56.644 12:56:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:56.644 12:56:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:56.644 12:56:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:56.644 12:56:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:56.644 12:56:28 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:56.644 12:56:28 -- nvmf/common.sh@74 -- # 
[[ -z 192.168.100.9 ]] 00:28:56.644 12:56:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:56.644 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:56.644 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:28:56.644 altname enp152s0f1np1 00:28:56.644 altname ens817f1np1 00:28:56.644 inet 192.168.100.9/24 scope global mlx_0_1 00:28:56.644 valid_lft forever preferred_lft forever 00:28:56.644 12:56:28 -- nvmf/common.sh@410 -- # return 0 00:28:56.644 12:56:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:56.644 12:56:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:56.644 12:56:28 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:56.644 12:56:28 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:56.644 12:56:28 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:56.644 12:56:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:56.644 12:56:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:56.644 12:56:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:56.644 12:56:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:56.644 12:56:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:56.644 12:56:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:56.644 12:56:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:56.644 12:56:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:56.644 12:56:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:56.644 12:56:28 -- nvmf/common.sh@104 -- # continue 2 00:28:56.644 12:56:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:56.644 12:56:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:56.644 12:56:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:56.644 12:56:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:56.644 12:56:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:56.644 12:56:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:56.644 12:56:28 -- nvmf/common.sh@104 -- # continue 2 00:28:56.644 12:56:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:56.644 12:56:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:56.644 12:56:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:56.644 12:56:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:56.644 12:56:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:56.644 12:56:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:56.644 12:56:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:56.644 12:56:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:56.644 12:56:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:56.644 12:56:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:56.644 12:56:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:56.644 12:56:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:56.644 12:56:28 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:56.644 192.168.100.9' 00:28:56.644 12:56:28 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:56.644 192.168.100.9' 00:28:56.644 12:56:28 -- nvmf/common.sh@445 -- # head -n 1 00:28:56.644 12:56:28 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:56.644 12:56:28 -- nvmf/common.sh@446 -- # tail -n +2 00:28:56.644 12:56:28 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:56.644 192.168.100.9' 00:28:56.644 12:56:28 -- nvmf/common.sh@446 -- # 
head -n 1 00:28:56.644 12:56:28 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:56.644 12:56:28 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:28:56.644 12:56:28 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:56.644 12:56:28 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:56.644 12:56:28 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:56.644 12:56:28 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:56.644 12:56:28 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:56.644 12:56:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:56.644 12:56:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:56.644 12:56:28 -- common/autotest_common.sh@10 -- # set +x 00:28:56.644 ************************************ 00:28:56.644 START TEST nvmf_target_disconnect_tc1 00:28:56.644 ************************************ 00:28:56.644 12:56:28 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc1 00:28:56.644 12:56:28 -- host/target_disconnect.sh@32 -- # set +e 00:28:56.644 12:56:28 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:56.644 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.644 [2024-11-20 12:56:28.633927] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:56.644 [2024-11-20 12:56:28.633976] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:56.644 [2024-11-20 12:56:28.633990] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d70c0 00:28:56.644 [2024-11-20 12:56:29.638276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:56.644 [2024-11-20 12:56:29.638297] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
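allocate_nic_ips and get_available_rdma_ips above both reduce to the same per-interface pipeline: take the fourth field of ip -o -4 addr show and drop the prefix length. A short sketch of that helper, with the interface names and addresses from this run used only as examples:

  # Print the first IPv4 address configured on an interface, the same awk/cut
  # pipeline the trace runs for mlx_0_0 and mlx_0_1.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # 192.168.100.8 in this run
  get_ip_address mlx_0_1   # 192.168.100.9 in this run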
00:28:56.644 [2024-11-20 12:56:29.638305] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:28:56.645 [2024-11-20 12:56:29.638324] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:56.645 [2024-11-20 12:56:29.638331] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:56.645 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:28:56.645 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:56.645 Initializing NVMe Controllers 00:28:56.645 12:56:29 -- host/target_disconnect.sh@33 -- # trap - ERR 00:28:56.645 12:56:29 -- host/target_disconnect.sh@33 -- # print_backtrace 00:28:56.645 12:56:29 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:28:56.645 12:56:29 -- common/autotest_common.sh@1142 -- # return 0 00:28:56.645 12:56:29 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:28:56.645 12:56:29 -- host/target_disconnect.sh@41 -- # set -e 00:28:56.645 00:28:56.645 real 0m1.129s 00:28:56.645 user 0m0.964s 00:28:56.645 sys 0m0.145s 00:28:56.645 12:56:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:56.645 12:56:29 -- common/autotest_common.sh@10 -- # set +x 00:28:56.645 ************************************ 00:28:56.645 END TEST nvmf_target_disconnect_tc1 00:28:56.645 ************************************ 00:28:56.645 12:56:29 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:56.645 12:56:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:56.645 12:56:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:56.645 12:56:29 -- common/autotest_common.sh@10 -- # set +x 00:28:56.645 ************************************ 00:28:56.645 START TEST nvmf_target_disconnect_tc2 00:28:56.645 ************************************ 00:28:56.645 12:56:29 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc2 00:28:56.645 12:56:29 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:28:56.645 12:56:29 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:56.645 12:56:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:56.645 12:56:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:56.645 12:56:29 -- common/autotest_common.sh@10 -- # set +x 00:28:56.645 12:56:29 -- nvmf/common.sh@469 -- # nvmfpid=688286 00:28:56.645 12:56:29 -- nvmf/common.sh@470 -- # waitforlisten 688286 00:28:56.645 12:56:29 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:56.645 12:56:29 -- common/autotest_common.sh@829 -- # '[' -z 688286 ']' 00:28:56.645 12:56:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.645 12:56:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:56.645 12:56:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.645 12:56:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:56.645 12:56:29 -- common/autotest_common.sh@10 -- # set +x 00:28:56.906 [2024-11-20 12:56:29.749825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
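Condensed, the tc1 case above only verifies that the reconnect example fails cleanly when pointed at 192.168.100.8 before any target is serving it. The binary path and arguments below are the ones shown in the trace; the wrapper itself is a sketch, not the test script:

  # Expect the probe to fail: the RDMA CM connect is rejected (as in the CM events
  # logged above), so the example must exit non-zero for tc1 to pass.
  set +e
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect \
      -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
  rc=$?
  set -e
  if [[ $rc -ne 0 ]]; then echo "tc1: reconnect failed as expected (rc=$rc)"; fi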
00:28:56.906 [2024-11-20 12:56:29.749880] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.906 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.906 [2024-11-20 12:56:29.834210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:56.906 [2024-11-20 12:56:29.926411] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:56.906 [2024-11-20 12:56:29.926563] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.906 [2024-11-20 12:56:29.926573] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.906 [2024-11-20 12:56:29.926580] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:56.906 [2024-11-20 12:56:29.926654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:56.906 [2024-11-20 12:56:29.926799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:56.906 [2024-11-20 12:56:29.926959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:56.906 [2024-11-20 12:56:29.926960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:57.476 12:56:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:57.476 12:56:30 -- common/autotest_common.sh@862 -- # return 0 00:28:57.476 12:56:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:57.476 12:56:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:57.476 12:56:30 -- common/autotest_common.sh@10 -- # set +x 00:28:57.736 12:56:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.736 12:56:30 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:57.736 12:56:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.736 12:56:30 -- common/autotest_common.sh@10 -- # set +x 00:28:57.736 Malloc0 00:28:57.736 12:56:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.736 12:56:30 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:57.736 12:56:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.736 12:56:30 -- common/autotest_common.sh@10 -- # set +x 00:28:57.736 [2024-11-20 12:56:30.665280] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2023b40/0x202f760) succeed. 00:28:57.736 [2024-11-20 12:56:30.681272] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2025130/0x2070e00) succeed. 
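nvmfappstart and waitforlisten above block until the freshly launched nvmf_tgt answers on its RPC socket before any rpc_cmd calls are issued. A simplified stand-in for that wait, assuming the /var/tmp/spdk.sock socket named in the trace; rpc.py and rpc_get_methods are assumptions here and the real helper also tracks the pid:

  # Poll the target's RPC socket until it responds.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  while ! "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done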
00:28:57.736 12:56:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.736 12:56:30 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:57.736 12:56:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.736 12:56:30 -- common/autotest_common.sh@10 -- # set +x 00:28:57.996 12:56:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.996 12:56:30 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:57.996 12:56:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.996 12:56:30 -- common/autotest_common.sh@10 -- # set +x 00:28:57.996 12:56:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.996 12:56:30 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:57.996 12:56:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.996 12:56:30 -- common/autotest_common.sh@10 -- # set +x 00:28:57.996 [2024-11-20 12:56:30.867357] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:57.996 12:56:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.996 12:56:30 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:28:57.996 12:56:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.996 12:56:30 -- common/autotest_common.sh@10 -- # set +x 00:28:57.996 12:56:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.996 12:56:30 -- host/target_disconnect.sh@50 -- # reconnectpid=688432 00:28:57.996 12:56:30 -- host/target_disconnect.sh@52 -- # sleep 2 00:28:57.996 12:56:30 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:57.996 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.908 12:56:32 -- host/target_disconnect.sh@53 -- # kill -9 688286 00:28:59.908 12:56:32 -- host/target_disconnect.sh@55 -- # sleep 2 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Write completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Write completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Write completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Write completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Write completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Write completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Write completed with error 
(sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Write completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Write completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Write completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Write completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.294 starting I/O failed 00:29:01.294 Read completed with error (sct=0, sc=8) 00:29:01.295 starting I/O failed 00:29:01.295 Read completed with error (sct=0, sc=8) 00:29:01.295 starting I/O failed 00:29:01.295 Read completed with error (sct=0, sc=8) 00:29:01.295 starting I/O failed 00:29:01.295 Write completed with error (sct=0, sc=8) 00:29:01.295 starting I/O failed 00:29:01.295 [2024-11-20 12:56:34.083675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.866 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 688286 Killed "${NVMF_APP[@]}" "$@" 00:29:01.866 12:56:34 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:29:01.866 12:56:34 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:01.866 12:56:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:01.866 12:56:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:01.866 12:56:34 -- common/autotest_common.sh@10 -- # set +x 00:29:01.866 12:56:34 -- nvmf/common.sh@469 -- # nvmfpid=689272 00:29:01.866 12:56:34 -- nvmf/common.sh@470 -- # waitforlisten 689272 00:29:01.866 12:56:34 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:01.866 12:56:34 -- common/autotest_common.sh@829 -- # '[' -z 689272 ']' 00:29:01.866 12:56:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.866 12:56:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:01.866 12:56:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.866 12:56:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:01.866 12:56:34 -- common/autotest_common.sh@10 -- # set +x 00:29:01.866 [2024-11-20 12:56:34.948379] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
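The burst of aborted completions above is the intended effect of tc2's disconnect step: the reconnect example keeps running in the background while the target it is connected to is killed, and a fresh target is then started for it to recover against. A stripped-down sketch of that sequence, compressing the target_disconnect.sh steps visible in the trace (the pids in the log are placeholders here):

  reconnect_bin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect
  "$reconnect_bin" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  kill -9 "$nvmfpid"   # nvmfpid: the first nvmf_tgt (688286 in this run)
  sleep 2              # every outstanding command now completes with an error and the
                       # CQ reports transport error -6, as logged above; the target is then restarted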
00:29:01.866 [2024-11-20 12:56:34.948434] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.127 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.127 [2024-11-20 12:56:35.026103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:02.127 [2024-11-20 12:56:35.079070] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:02.127 [2024-11-20 12:56:35.079164] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.127 [2024-11-20 12:56:35.079170] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.127 [2024-11-20 12:56:35.079176] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:02.127 [2024-11-20 12:56:35.079341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:02.127 [2024-11-20 12:56:35.079493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:02.127 [2024-11-20 12:56:35.079643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:02.127 [2024-11-20 12:56:35.079645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:02.127 Read completed with error (sct=0, sc=8) 00:29:02.127 starting I/O failed 00:29:02.127 Read completed with error (sct=0, sc=8) 00:29:02.127 starting I/O failed 00:29:02.127 Write completed with error (sct=0, sc=8) 00:29:02.127 starting I/O failed 00:29:02.127 Write completed with error (sct=0, sc=8) 00:29:02.127 starting I/O failed 00:29:02.127 Write completed with error (sct=0, sc=8) 00:29:02.127 starting I/O failed 00:29:02.127 Write completed with error (sct=0, sc=8) 00:29:02.127 starting I/O failed 00:29:02.127 Read completed with error (sct=0, sc=8) 00:29:02.127 starting I/O failed 00:29:02.127 Read completed with error (sct=0, sc=8) 00:29:02.127 starting I/O failed 00:29:02.127 Write completed with error (sct=0, sc=8) 00:29:02.127 starting I/O failed 00:29:02.128 Read completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Write completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Write completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Write completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Read completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Read completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Read completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Write completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Write completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Read completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Read completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Read completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Write completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Read completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Write completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Write completed with error (sct=0, sc=8) 00:29:02.128 starting I/O 
failed 00:29:02.128 Read completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Read completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Read completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Write completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Write completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Read completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 Read completed with error (sct=0, sc=8) 00:29:02.128 starting I/O failed 00:29:02.128 [2024-11-20 12:56:35.089306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.128 [2024-11-20 12:56:35.091940] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:02.128 [2024-11-20 12:56:35.091954] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:02.128 [2024-11-20 12:56:35.091959] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:02.698 12:56:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:02.698 12:56:35 -- common/autotest_common.sh@862 -- # return 0 00:29:02.698 12:56:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:02.698 12:56:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:02.698 12:56:35 -- common/autotest_common.sh@10 -- # set +x 00:29:02.698 12:56:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.698 12:56:35 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:02.698 12:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.698 12:56:35 -- common/autotest_common.sh@10 -- # set +x 00:29:02.698 Malloc0 00:29:02.698 12:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.698 12:56:35 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:02.698 12:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.698 12:56:35 -- common/autotest_common.sh@10 -- # set +x 00:29:02.958 [2024-11-20 12:56:35.829357] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24bfb40/0x24cb760) succeed. 00:29:02.958 [2024-11-20 12:56:35.841751] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24c1130/0x250ce00) succeed. 
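The RDMA_CM_EVENT_REJECTED errors above come from the host retrying while the replacement target has started but has not yet re-created its transport, subsystem, and listener; those RPCs only land further down. When poking at that window by hand, one option is to poll the target's RPC until a listener on the expected address is visible. This is a debugging aid added here, not part of the test, and nvmf_get_subsystems is an assumption rather than something the trace runs:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # Wait until some subsystem reports a listener on the expected RDMA address.
  until "$rpc" nvmf_get_subsystems | grep -q 192.168.100.8; do
      sleep 1
  done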
00:29:02.958 12:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.958 12:56:35 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:02.958 12:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.958 12:56:35 -- common/autotest_common.sh@10 -- # set +x 00:29:02.958 12:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.959 12:56:35 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:02.959 12:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.959 12:56:35 -- common/autotest_common.sh@10 -- # set +x 00:29:02.959 12:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.959 12:56:35 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:02.959 12:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.959 12:56:35 -- common/autotest_common.sh@10 -- # set +x 00:29:02.959 [2024-11-20 12:56:35.974034] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:02.959 12:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.959 12:56:35 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:02.959 12:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.959 12:56:35 -- common/autotest_common.sh@10 -- # set +x 00:29:02.959 12:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.959 12:56:35 -- host/target_disconnect.sh@58 -- # wait 688432 00:29:03.220 [2024-11-20 12:56:36.096435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.220 qpair failed and we were unable to recover it. 00:29:03.220 [2024-11-20 12:56:36.105039] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.220 [2024-11-20 12:56:36.105083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.220 [2024-11-20 12:56:36.105096] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.220 [2024-11-20 12:56:36.105102] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.220 [2024-11-20 12:56:36.105107] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.220 [2024-11-20 12:56:36.114368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.220 qpair failed and we were unable to recover it. 
00:29:03.220 [2024-11-20 12:56:36.125207] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.220 [2024-11-20 12:56:36.125239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.220 [2024-11-20 12:56:36.125250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.220 [2024-11-20 12:56:36.125256] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.220 [2024-11-20 12:56:36.125261] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.220 [2024-11-20 12:56:36.134535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.220 qpair failed and we were unable to recover it. 00:29:03.220 [2024-11-20 12:56:36.145137] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.220 [2024-11-20 12:56:36.145163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.220 [2024-11-20 12:56:36.145173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.220 [2024-11-20 12:56:36.145179] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.220 [2024-11-20 12:56:36.145183] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.220 [2024-11-20 12:56:36.154605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.220 qpair failed and we were unable to recover it. 00:29:03.220 [2024-11-20 12:56:36.164404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.220 [2024-11-20 12:56:36.164438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.220 [2024-11-20 12:56:36.164448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.220 [2024-11-20 12:56:36.164453] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.220 [2024-11-20 12:56:36.164458] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.220 [2024-11-20 12:56:36.174432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.220 qpair failed and we were unable to recover it. 
00:29:03.220 [2024-11-20 12:56:36.185211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.220 [2024-11-20 12:56:36.185244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.220 [2024-11-20 12:56:36.185254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.220 [2024-11-20 12:56:36.185259] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.220 [2024-11-20 12:56:36.185264] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.220 [2024-11-20 12:56:36.194615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.220 qpair failed and we were unable to recover it. 00:29:03.220 [2024-11-20 12:56:36.205229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.220 [2024-11-20 12:56:36.205257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.220 [2024-11-20 12:56:36.205266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.220 [2024-11-20 12:56:36.205271] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.220 [2024-11-20 12:56:36.205276] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.220 [2024-11-20 12:56:36.214508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.220 qpair failed and we were unable to recover it. 00:29:03.220 [2024-11-20 12:56:36.225247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.220 [2024-11-20 12:56:36.225273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.220 [2024-11-20 12:56:36.225283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.220 [2024-11-20 12:56:36.225288] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.220 [2024-11-20 12:56:36.225292] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.220 [2024-11-20 12:56:36.234790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.221 qpair failed and we were unable to recover it. 
00:29:03.221 [2024-11-20 12:56:36.244913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.221 [2024-11-20 12:56:36.244942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.221 [2024-11-20 12:56:36.244952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.221 [2024-11-20 12:56:36.244957] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.221 [2024-11-20 12:56:36.244962] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.221 [2024-11-20 12:56:36.254652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.221 qpair failed and we were unable to recover it. 00:29:03.221 [2024-11-20 12:56:36.265539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.221 [2024-11-20 12:56:36.265576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.221 [2024-11-20 12:56:36.265599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.221 [2024-11-20 12:56:36.265605] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.221 [2024-11-20 12:56:36.265610] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.221 [2024-11-20 12:56:36.274863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.221 qpair failed and we were unable to recover it. 00:29:03.221 [2024-11-20 12:56:36.285520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.221 [2024-11-20 12:56:36.285552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.221 [2024-11-20 12:56:36.285563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.221 [2024-11-20 12:56:36.285569] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.221 [2024-11-20 12:56:36.285574] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.221 [2024-11-20 12:56:36.294638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.221 qpair failed and we were unable to recover it. 
00:29:03.221 [2024-11-20 12:56:36.305661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.221 [2024-11-20 12:56:36.305687] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.221 [2024-11-20 12:56:36.305697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.221 [2024-11-20 12:56:36.305703] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.221 [2024-11-20 12:56:36.305707] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.221 [2024-11-20 12:56:36.314675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.221 qpair failed and we were unable to recover it. 00:29:03.537 [2024-11-20 12:56:36.325348] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.537 [2024-11-20 12:56:36.325379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.537 [2024-11-20 12:56:36.325389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.537 [2024-11-20 12:56:36.325394] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.537 [2024-11-20 12:56:36.325400] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.537 [2024-11-20 12:56:36.335032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.537 qpair failed and we were unable to recover it. 00:29:03.537 [2024-11-20 12:56:36.345007] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.537 [2024-11-20 12:56:36.345044] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.537 [2024-11-20 12:56:36.345058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.537 [2024-11-20 12:56:36.345063] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.537 [2024-11-20 12:56:36.345071] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.537 [2024-11-20 12:56:36.354739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.537 qpair failed and we were unable to recover it. 
00:29:03.537 [2024-11-20 12:56:36.364954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.537 [2024-11-20 12:56:36.364991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.537 [2024-11-20 12:56:36.365001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.537 [2024-11-20 12:56:36.365006] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.537 [2024-11-20 12:56:36.365011] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.537 [2024-11-20 12:56:36.375028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.537 qpair failed and we were unable to recover it. 00:29:03.537 [2024-11-20 12:56:36.386008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.537 [2024-11-20 12:56:36.386034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.537 [2024-11-20 12:56:36.386043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.537 [2024-11-20 12:56:36.386048] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.537 [2024-11-20 12:56:36.386053] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.537 [2024-11-20 12:56:36.395055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.537 qpair failed and we were unable to recover it. 00:29:03.537 [2024-11-20 12:56:36.405501] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.537 [2024-11-20 12:56:36.405530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.537 [2024-11-20 12:56:36.405539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.537 [2024-11-20 12:56:36.405544] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.537 [2024-11-20 12:56:36.405549] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.537 [2024-11-20 12:56:36.415139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.537 qpair failed and we were unable to recover it. 
00:29:03.537 [2024-11-20 12:56:36.425744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.537 [2024-11-20 12:56:36.425776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.537 [2024-11-20 12:56:36.425786] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.537 [2024-11-20 12:56:36.425790] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.537 [2024-11-20 12:56:36.425795] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.537 [2024-11-20 12:56:36.434977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.537 qpair failed and we were unable to recover it. 00:29:03.537 [2024-11-20 12:56:36.445218] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.537 [2024-11-20 12:56:36.445242] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.537 [2024-11-20 12:56:36.445252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.537 [2024-11-20 12:56:36.445257] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.537 [2024-11-20 12:56:36.445261] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.537 [2024-11-20 12:56:36.455223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.537 qpair failed and we were unable to recover it. 00:29:03.537 [2024-11-20 12:56:36.465612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.537 [2024-11-20 12:56:36.465637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.538 [2024-11-20 12:56:36.465647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.538 [2024-11-20 12:56:36.465652] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.538 [2024-11-20 12:56:36.465656] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.538 [2024-11-20 12:56:36.475195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.538 qpair failed and we were unable to recover it. 
00:29:03.538 [2024-11-20 12:56:36.484797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.538 [2024-11-20 12:56:36.484824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.538 [2024-11-20 12:56:36.484834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.538 [2024-11-20 12:56:36.484839] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.538 [2024-11-20 12:56:36.484843] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.538 [2024-11-20 12:56:36.495063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.538 qpair failed and we were unable to recover it. 00:29:03.538 [2024-11-20 12:56:36.506048] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.538 [2024-11-20 12:56:36.506078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.538 [2024-11-20 12:56:36.506087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.538 [2024-11-20 12:56:36.506092] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.538 [2024-11-20 12:56:36.506097] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.538 [2024-11-20 12:56:36.515358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.538 qpair failed and we were unable to recover it. 00:29:03.538 [2024-11-20 12:56:36.525817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.538 [2024-11-20 12:56:36.525849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.538 [2024-11-20 12:56:36.525858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.538 [2024-11-20 12:56:36.525866] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.538 [2024-11-20 12:56:36.525870] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.538 [2024-11-20 12:56:36.535522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.538 qpair failed and we were unable to recover it. 
00:29:03.538 [2024-11-20 12:56:36.545213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.538 [2024-11-20 12:56:36.545243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.538 [2024-11-20 12:56:36.545252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.538 [2024-11-20 12:56:36.545257] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.538 [2024-11-20 12:56:36.545261] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.538 [2024-11-20 12:56:36.555285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.538 qpair failed and we were unable to recover it. 00:29:03.538 [2024-11-20 12:56:36.565482] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.538 [2024-11-20 12:56:36.565512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.538 [2024-11-20 12:56:36.565522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.538 [2024-11-20 12:56:36.565526] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.538 [2024-11-20 12:56:36.565531] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.538 [2024-11-20 12:56:36.575272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.538 qpair failed and we were unable to recover it. 00:29:03.538 [2024-11-20 12:56:36.585962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.538 [2024-11-20 12:56:36.585995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.538 [2024-11-20 12:56:36.586005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.538 [2024-11-20 12:56:36.586010] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.538 [2024-11-20 12:56:36.586014] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.538 [2024-11-20 12:56:36.595523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.538 qpair failed and we were unable to recover it. 
00:29:03.538 [2024-11-20 12:56:36.606290] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.538 [2024-11-20 12:56:36.606318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.538 [2024-11-20 12:56:36.606328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.538 [2024-11-20 12:56:36.606332] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.538 [2024-11-20 12:56:36.606337] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.538 [2024-11-20 12:56:36.615557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.538 qpair failed and we were unable to recover it. 00:29:03.538 [2024-11-20 12:56:36.626086] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.538 [2024-11-20 12:56:36.626115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.538 [2024-11-20 12:56:36.626124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.538 [2024-11-20 12:56:36.626129] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.538 [2024-11-20 12:56:36.626134] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.538 [2024-11-20 12:56:36.635710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.538 qpair failed and we were unable to recover it. 00:29:03.809 [2024-11-20 12:56:36.646032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.809 [2024-11-20 12:56:36.646060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.809 [2024-11-20 12:56:36.646070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.809 [2024-11-20 12:56:36.646075] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.809 [2024-11-20 12:56:36.646079] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.809 [2024-11-20 12:56:36.655792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.809 qpair failed and we were unable to recover it. 
00:29:03.809 [2024-11-20 12:56:36.666565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.809 [2024-11-20 12:56:36.666597] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.809 [2024-11-20 12:56:36.666607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.809 [2024-11-20 12:56:36.666612] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.809 [2024-11-20 12:56:36.666616] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.809 [2024-11-20 12:56:36.675994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.809 qpair failed and we were unable to recover it. 00:29:03.809 [2024-11-20 12:56:36.686003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.809 [2024-11-20 12:56:36.686028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.809 [2024-11-20 12:56:36.686037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.809 [2024-11-20 12:56:36.686042] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.809 [2024-11-20 12:56:36.686047] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.809 [2024-11-20 12:56:36.695815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.809 qpair failed and we were unable to recover it. 00:29:03.809 [2024-11-20 12:56:36.706693] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.809 [2024-11-20 12:56:36.706718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.809 [2024-11-20 12:56:36.706731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.809 [2024-11-20 12:56:36.706736] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.809 [2024-11-20 12:56:36.706740] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.809 [2024-11-20 12:56:36.715965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.809 qpair failed and we were unable to recover it. 
00:29:03.809 [2024-11-20 12:56:36.726350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.809 [2024-11-20 12:56:36.726378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.809 [2024-11-20 12:56:36.726388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.809 [2024-11-20 12:56:36.726393] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.809 [2024-11-20 12:56:36.726397] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.809 [2024-11-20 12:56:36.736628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.809 qpair failed and we were unable to recover it. 00:29:03.809 [2024-11-20 12:56:36.746725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.809 [2024-11-20 12:56:36.746755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.809 [2024-11-20 12:56:36.746764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.809 [2024-11-20 12:56:36.746769] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.809 [2024-11-20 12:56:36.746773] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.809 [2024-11-20 12:56:36.756082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.809 qpair failed and we were unable to recover it. 00:29:03.809 [2024-11-20 12:56:36.766327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.809 [2024-11-20 12:56:36.766355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.809 [2024-11-20 12:56:36.766365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.809 [2024-11-20 12:56:36.766369] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.809 [2024-11-20 12:56:36.766374] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.809 [2024-11-20 12:56:36.776278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.809 qpair failed and we were unable to recover it. 
00:29:03.809 [2024-11-20 12:56:36.786778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.810 [2024-11-20 12:56:36.786803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.810 [2024-11-20 12:56:36.786812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.810 [2024-11-20 12:56:36.786817] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.810 [2024-11-20 12:56:36.786824] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.810 [2024-11-20 12:56:36.796162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.810 qpair failed and we were unable to recover it. 00:29:03.810 [2024-11-20 12:56:36.806594] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.810 [2024-11-20 12:56:36.806621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.810 [2024-11-20 12:56:36.806630] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.810 [2024-11-20 12:56:36.806635] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.810 [2024-11-20 12:56:36.806639] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.810 [2024-11-20 12:56:36.816444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.810 qpair failed and we were unable to recover it. 00:29:03.810 [2024-11-20 12:56:36.826822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.810 [2024-11-20 12:56:36.826857] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.810 [2024-11-20 12:56:36.826866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.810 [2024-11-20 12:56:36.826871] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.810 [2024-11-20 12:56:36.826876] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.810 [2024-11-20 12:56:36.836202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.810 qpair failed and we were unable to recover it. 
00:29:03.810 [2024-11-20 12:56:36.846829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.810 [2024-11-20 12:56:36.846864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.810 [2024-11-20 12:56:36.846873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.810 [2024-11-20 12:56:36.846877] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.810 [2024-11-20 12:56:36.846882] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.810 [2024-11-20 12:56:36.856274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.810 qpair failed and we were unable to recover it. 00:29:03.810 [2024-11-20 12:56:36.867251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.810 [2024-11-20 12:56:36.867284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.810 [2024-11-20 12:56:36.867293] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.810 [2024-11-20 12:56:36.867298] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.810 [2024-11-20 12:56:36.867302] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.810 [2024-11-20 12:56:36.876372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.810 qpair failed and we were unable to recover it. 00:29:03.810 [2024-11-20 12:56:36.886632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.810 [2024-11-20 12:56:36.886661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.810 [2024-11-20 12:56:36.886670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.810 [2024-11-20 12:56:36.886675] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.810 [2024-11-20 12:56:36.886680] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:03.810 [2024-11-20 12:56:36.896509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.810 qpair failed and we were unable to recover it. 
00:29:03.810 [2024-11-20 12:56:36.906264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.810 [2024-11-20 12:56:36.906294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.810 [2024-11-20 12:56:36.906303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.810 [2024-11-20 12:56:36.906308] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.810 [2024-11-20 12:56:36.906312] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.091 [2024-11-20 12:56:36.916459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.091 qpair failed and we were unable to recover it. 00:29:04.091 [2024-11-20 12:56:36.927374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.091 [2024-11-20 12:56:36.927405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.091 [2024-11-20 12:56:36.927415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.091 [2024-11-20 12:56:36.927420] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.091 [2024-11-20 12:56:36.927424] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.091 [2024-11-20 12:56:36.936858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.091 qpair failed and we were unable to recover it. 00:29:04.091 [2024-11-20 12:56:36.947372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.091 [2024-11-20 12:56:36.947402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.091 [2024-11-20 12:56:36.947411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.091 [2024-11-20 12:56:36.947416] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.091 [2024-11-20 12:56:36.947420] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.091 [2024-11-20 12:56:36.956665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.091 qpair failed and we were unable to recover it. 
00:29:04.091 [2024-11-20 12:56:36.966928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.091 [2024-11-20 12:56:36.966962] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.091 [2024-11-20 12:56:36.966971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.091 [2024-11-20 12:56:36.966985] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.091 [2024-11-20 12:56:36.966990] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.091 [2024-11-20 12:56:36.976570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.091 qpair failed and we were unable to recover it. 00:29:04.091 [2024-11-20 12:56:36.987480] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.091 [2024-11-20 12:56:36.987510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.091 [2024-11-20 12:56:36.987520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.091 [2024-11-20 12:56:36.987525] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.091 [2024-11-20 12:56:36.987529] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.091 [2024-11-20 12:56:36.996751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.091 qpair failed and we were unable to recover it. 00:29:04.091 [2024-11-20 12:56:37.007216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.091 [2024-11-20 12:56:37.007244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.091 [2024-11-20 12:56:37.007253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.091 [2024-11-20 12:56:37.007258] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.091 [2024-11-20 12:56:37.007263] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.091 [2024-11-20 12:56:37.016612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.091 qpair failed and we were unable to recover it. 
00:29:04.091 [2024-11-20 12:56:37.027726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.091 [2024-11-20 12:56:37.027760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.091 [2024-11-20 12:56:37.027780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.091 [2024-11-20 12:56:37.027786] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.091 [2024-11-20 12:56:37.027791] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.091 [2024-11-20 12:56:37.037190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.091 qpair failed and we were unable to recover it. 00:29:04.091 [2024-11-20 12:56:37.047263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.091 [2024-11-20 12:56:37.047293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.091 [2024-11-20 12:56:37.047303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.091 [2024-11-20 12:56:37.047308] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.091 [2024-11-20 12:56:37.047313] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.091 [2024-11-20 12:56:37.057135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.091 qpair failed and we were unable to recover it. 00:29:04.091 [2024-11-20 12:56:37.067290] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.091 [2024-11-20 12:56:37.067325] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.091 [2024-11-20 12:56:37.067335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.091 [2024-11-20 12:56:37.067340] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.091 [2024-11-20 12:56:37.067344] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.091 [2024-11-20 12:56:37.077047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.091 qpair failed and we were unable to recover it. 
00:29:04.091 [2024-11-20 12:56:37.087642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.092 [2024-11-20 12:56:37.087669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.092 [2024-11-20 12:56:37.087678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.092 [2024-11-20 12:56:37.087683] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.092 [2024-11-20 12:56:37.087687] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.092 [2024-11-20 12:56:37.097090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.092 qpair failed and we were unable to recover it. 00:29:04.092 [2024-11-20 12:56:37.107731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.092 [2024-11-20 12:56:37.107758] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.092 [2024-11-20 12:56:37.107768] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.092 [2024-11-20 12:56:37.107772] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.092 [2024-11-20 12:56:37.107777] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.092 [2024-11-20 12:56:37.117006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.092 qpair failed and we were unable to recover it. 00:29:04.092 [2024-11-20 12:56:37.127604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.092 [2024-11-20 12:56:37.127632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.092 [2024-11-20 12:56:37.127641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.092 [2024-11-20 12:56:37.127646] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.092 [2024-11-20 12:56:37.127651] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.092 [2024-11-20 12:56:37.137278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.092 qpair failed and we were unable to recover it. 
00:29:04.092 [2024-11-20 12:56:37.147909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.092 [2024-11-20 12:56:37.147939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.092 [2024-11-20 12:56:37.147951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.092 [2024-11-20 12:56:37.147956] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.092 [2024-11-20 12:56:37.147960] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.092 [2024-11-20 12:56:37.157283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.092 qpair failed and we were unable to recover it. 00:29:04.092 [2024-11-20 12:56:37.168438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.092 [2024-11-20 12:56:37.168469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.092 [2024-11-20 12:56:37.168488] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.092 [2024-11-20 12:56:37.168494] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.092 [2024-11-20 12:56:37.168499] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.092 [2024-11-20 12:56:37.177305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.092 qpair failed and we were unable to recover it. 00:29:04.092 [2024-11-20 12:56:37.187593] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.092 [2024-11-20 12:56:37.187623] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.092 [2024-11-20 12:56:37.187633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.092 [2024-11-20 12:56:37.187639] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.092 [2024-11-20 12:56:37.187643] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.391 [2024-11-20 12:56:37.197383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.391 qpair failed and we were unable to recover it. 
00:29:04.391 [2024-11-20 12:56:37.207690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.391 [2024-11-20 12:56:37.207718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.391 [2024-11-20 12:56:37.207728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.391 [2024-11-20 12:56:37.207733] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.391 [2024-11-20 12:56:37.207738] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.391 [2024-11-20 12:56:37.217390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.391 qpair failed and we were unable to recover it. 00:29:04.391 [2024-11-20 12:56:37.228472] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.391 [2024-11-20 12:56:37.228507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.391 [2024-11-20 12:56:37.228526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.391 [2024-11-20 12:56:37.228532] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.391 [2024-11-20 12:56:37.228540] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.391 [2024-11-20 12:56:37.237594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.391 qpair failed and we were unable to recover it. 00:29:04.391 [2024-11-20 12:56:37.248422] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.391 [2024-11-20 12:56:37.248455] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.391 [2024-11-20 12:56:37.248475] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.391 [2024-11-20 12:56:37.248481] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.391 [2024-11-20 12:56:37.248486] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.391 [2024-11-20 12:56:37.258036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.391 qpair failed and we were unable to recover it. 
00:29:04.391 [2024-11-20 12:56:37.268591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.391 [2024-11-20 12:56:37.268618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.391 [2024-11-20 12:56:37.268638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.391 [2024-11-20 12:56:37.268644] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.391 [2024-11-20 12:56:37.268649] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.391 [2024-11-20 12:56:37.277684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.391 qpair failed and we were unable to recover it. 00:29:04.391 [2024-11-20 12:56:37.288021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.391 [2024-11-20 12:56:37.288052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.391 [2024-11-20 12:56:37.288063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.391 [2024-11-20 12:56:37.288068] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.391 [2024-11-20 12:56:37.288072] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.391 [2024-11-20 12:56:37.297594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.391 qpair failed and we were unable to recover it. 00:29:04.391 [2024-11-20 12:56:37.308417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.391 [2024-11-20 12:56:37.308454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.391 [2024-11-20 12:56:37.308464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.391 [2024-11-20 12:56:37.308469] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.391 [2024-11-20 12:56:37.308473] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.391 [2024-11-20 12:56:37.317563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.391 qpair failed and we were unable to recover it. 
00:29:04.391 [2024-11-20 12:56:37.328514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.391 [2024-11-20 12:56:37.328549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.391 [2024-11-20 12:56:37.328559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.391 [2024-11-20 12:56:37.328564] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.391 [2024-11-20 12:56:37.328568] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.391 [2024-11-20 12:56:37.337507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.391 qpair failed and we were unable to recover it. 00:29:04.391 [2024-11-20 12:56:37.348628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.391 [2024-11-20 12:56:37.348654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.391 [2024-11-20 12:56:37.348664] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.391 [2024-11-20 12:56:37.348669] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.391 [2024-11-20 12:56:37.348673] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.391 [2024-11-20 12:56:37.357785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.391 qpair failed and we were unable to recover it. 00:29:04.391 [2024-11-20 12:56:37.368203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.391 [2024-11-20 12:56:37.368232] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.391 [2024-11-20 12:56:37.368242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.391 [2024-11-20 12:56:37.368246] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.391 [2024-11-20 12:56:37.368251] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.391 [2024-11-20 12:56:37.377816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.391 qpair failed and we were unable to recover it. 
00:29:04.391 [2024-11-20 12:56:37.388628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.391 [2024-11-20 12:56:37.388662] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.391 [2024-11-20 12:56:37.388672] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.391 [2024-11-20 12:56:37.388677] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.391 [2024-11-20 12:56:37.388681] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.391 [2024-11-20 12:56:37.397948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.391 qpair failed and we were unable to recover it. 00:29:04.391 [2024-11-20 12:56:37.408618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.391 [2024-11-20 12:56:37.408648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.391 [2024-11-20 12:56:37.408657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.391 [2024-11-20 12:56:37.408666] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.392 [2024-11-20 12:56:37.408671] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.392 [2024-11-20 12:56:37.417998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.392 qpair failed and we were unable to recover it. 00:29:04.392 [2024-11-20 12:56:37.428651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.392 [2024-11-20 12:56:37.428676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.392 [2024-11-20 12:56:37.428685] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.392 [2024-11-20 12:56:37.428690] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.392 [2024-11-20 12:56:37.428695] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.392 [2024-11-20 12:56:37.437850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.392 qpair failed and we were unable to recover it. 
00:29:04.392 [2024-11-20 12:56:37.448544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.392 [2024-11-20 12:56:37.448571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.392 [2024-11-20 12:56:37.448581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.392 [2024-11-20 12:56:37.448586] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.392 [2024-11-20 12:56:37.448590] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.392 [2024-11-20 12:56:37.458087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.392 qpair failed and we were unable to recover it. 00:29:04.392 [2024-11-20 12:56:37.468604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.392 [2024-11-20 12:56:37.468636] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.392 [2024-11-20 12:56:37.468646] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.392 [2024-11-20 12:56:37.468650] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.392 [2024-11-20 12:56:37.468655] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.392 [2024-11-20 12:56:37.478310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.392 qpair failed and we were unable to recover it. 00:29:04.660 [2024-11-20 12:56:37.488936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.660 [2024-11-20 12:56:37.488962] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.660 [2024-11-20 12:56:37.488971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.660 [2024-11-20 12:56:37.488976] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.660 [2024-11-20 12:56:37.488980] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.660 [2024-11-20 12:56:37.498164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.660 qpair failed and we were unable to recover it. 
00:29:04.660 [2024-11-20 12:56:37.508781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.660 [2024-11-20 12:56:37.508810] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.660 [2024-11-20 12:56:37.508820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.660 [2024-11-20 12:56:37.508825] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.660 [2024-11-20 12:56:37.508829] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.660 [2024-11-20 12:56:37.518290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.660 qpair failed and we were unable to recover it. 00:29:04.660 [2024-11-20 12:56:37.528584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.660 [2024-11-20 12:56:37.528614] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.660 [2024-11-20 12:56:37.528623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.660 [2024-11-20 12:56:37.528628] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.660 [2024-11-20 12:56:37.528632] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.660 [2024-11-20 12:56:37.538377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.660 qpair failed and we were unable to recover it. 00:29:04.660 [2024-11-20 12:56:37.549229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.660 [2024-11-20 12:56:37.549259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.660 [2024-11-20 12:56:37.549268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.660 [2024-11-20 12:56:37.549273] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.660 [2024-11-20 12:56:37.549277] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.660 [2024-11-20 12:56:37.558425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.660 qpair failed and we were unable to recover it. 
00:29:04.660 [2024-11-20 12:56:37.569410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.661 [2024-11-20 12:56:37.569436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.661 [2024-11-20 12:56:37.569445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.661 [2024-11-20 12:56:37.569450] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.661 [2024-11-20 12:56:37.569454] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.661 [2024-11-20 12:56:37.578288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.661 qpair failed and we were unable to recover it. 00:29:04.661 [2024-11-20 12:56:37.589380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.661 [2024-11-20 12:56:37.589412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.661 [2024-11-20 12:56:37.589435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.661 [2024-11-20 12:56:37.589441] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.661 [2024-11-20 12:56:37.589445] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.661 [2024-11-20 12:56:37.598394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.661 qpair failed and we were unable to recover it. 00:29:04.661 [2024-11-20 12:56:37.608805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.661 [2024-11-20 12:56:37.608836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.661 [2024-11-20 12:56:37.608846] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.661 [2024-11-20 12:56:37.608852] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.661 [2024-11-20 12:56:37.608856] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.661 [2024-11-20 12:56:37.618714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.661 qpair failed and we were unable to recover it. 
00:29:04.661 [2024-11-20 12:56:37.629529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.661 [2024-11-20 12:56:37.629563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.661 [2024-11-20 12:56:37.629582] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.661 [2024-11-20 12:56:37.629588] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.661 [2024-11-20 12:56:37.629594] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.661 [2024-11-20 12:56:37.638596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.661 qpair failed and we were unable to recover it. 00:29:04.661 [2024-11-20 12:56:37.649445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.661 [2024-11-20 12:56:37.649474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.661 [2024-11-20 12:56:37.649485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.661 [2024-11-20 12:56:37.649490] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.661 [2024-11-20 12:56:37.649495] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.661 [2024-11-20 12:56:37.658824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.661 qpair failed and we were unable to recover it. 00:29:04.661 [2024-11-20 12:56:37.669430] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.661 [2024-11-20 12:56:37.669460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.661 [2024-11-20 12:56:37.669470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.661 [2024-11-20 12:56:37.669475] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.661 [2024-11-20 12:56:37.669479] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.661 [2024-11-20 12:56:37.678901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.661 qpair failed and we were unable to recover it. 
00:29:04.661 [2024-11-20 12:56:37.689134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.661 [2024-11-20 12:56:37.689161] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.661 [2024-11-20 12:56:37.689171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.661 [2024-11-20 12:56:37.689176] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.661 [2024-11-20 12:56:37.689180] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.661 [2024-11-20 12:56:37.698739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.661 qpair failed and we were unable to recover it. 00:29:04.661 [2024-11-20 12:56:37.709432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.661 [2024-11-20 12:56:37.709464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.661 [2024-11-20 12:56:37.709473] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.661 [2024-11-20 12:56:37.709478] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.661 [2024-11-20 12:56:37.709482] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.661 [2024-11-20 12:56:37.719034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.661 qpair failed and we were unable to recover it. 00:29:04.661 [2024-11-20 12:56:37.729379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.661 [2024-11-20 12:56:37.729407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.661 [2024-11-20 12:56:37.729417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.661 [2024-11-20 12:56:37.729422] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.661 [2024-11-20 12:56:37.729426] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.661 [2024-11-20 12:56:37.739130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.661 qpair failed and we were unable to recover it. 
00:29:04.661 [2024-11-20 12:56:37.749716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.661 [2024-11-20 12:56:37.749741] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.661 [2024-11-20 12:56:37.749751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.661 [2024-11-20 12:56:37.749755] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.661 [2024-11-20 12:56:37.749760] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.661 [2024-11-20 12:56:37.759192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.661 qpair failed and we were unable to recover it. 00:29:04.939 [2024-11-20 12:56:37.769370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.939 [2024-11-20 12:56:37.769402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.939 [2024-11-20 12:56:37.769411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.939 [2024-11-20 12:56:37.769416] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.939 [2024-11-20 12:56:37.769420] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.939 [2024-11-20 12:56:37.779038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-11-20 12:56:37.789532] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.939 [2024-11-20 12:56:37.789562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.939 [2024-11-20 12:56:37.789572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.939 [2024-11-20 12:56:37.789576] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.939 [2024-11-20 12:56:37.789581] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.939 [2024-11-20 12:56:37.799158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.939 qpair failed and we were unable to recover it. 
00:29:04.939 [2024-11-20 12:56:37.809829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.939 [2024-11-20 12:56:37.809855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.939 [2024-11-20 12:56:37.809864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.939 [2024-11-20 12:56:37.809869] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.939 [2024-11-20 12:56:37.809873] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.939 [2024-11-20 12:56:37.819212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-11-20 12:56:37.829471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.939 [2024-11-20 12:56:37.829501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.939 [2024-11-20 12:56:37.829510] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.939 [2024-11-20 12:56:37.829515] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.939 [2024-11-20 12:56:37.829519] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.939 [2024-11-20 12:56:37.839273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-11-20 12:56:37.849443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.940 [2024-11-20 12:56:37.849472] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.940 [2024-11-20 12:56:37.849482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.940 [2024-11-20 12:56:37.849487] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.940 [2024-11-20 12:56:37.849494] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.940 [2024-11-20 12:56:37.859211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.940 qpair failed and we were unable to recover it. 
00:29:04.940 [2024-11-20 12:56:37.870073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.940 [2024-11-20 12:56:37.870103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.940 [2024-11-20 12:56:37.870117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.940 [2024-11-20 12:56:37.870122] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.940 [2024-11-20 12:56:37.870127] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.940 [2024-11-20 12:56:37.879354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-11-20 12:56:37.889997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.940 [2024-11-20 12:56:37.890029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.940 [2024-11-20 12:56:37.890049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.940 [2024-11-20 12:56:37.890055] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.940 [2024-11-20 12:56:37.890060] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.940 [2024-11-20 12:56:37.899918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-11-20 12:56:37.909958] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.940 [2024-11-20 12:56:37.909990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.940 [2024-11-20 12:56:37.910001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.940 [2024-11-20 12:56:37.910006] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.940 [2024-11-20 12:56:37.910011] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.940 [2024-11-20 12:56:37.919562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.940 qpair failed and we were unable to recover it. 
00:29:04.940 [2024-11-20 12:56:37.929797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.940 [2024-11-20 12:56:37.929826] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.940 [2024-11-20 12:56:37.929836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.940 [2024-11-20 12:56:37.929842] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.940 [2024-11-20 12:56:37.929846] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.940 [2024-11-20 12:56:37.939480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-11-20 12:56:37.950329] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.940 [2024-11-20 12:56:37.950368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.940 [2024-11-20 12:56:37.950377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.940 [2024-11-20 12:56:37.950382] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.940 [2024-11-20 12:56:37.950387] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.940 [2024-11-20 12:56:37.959509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-11-20 12:56:37.970232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.940 [2024-11-20 12:56:37.970260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.940 [2024-11-20 12:56:37.970269] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.940 [2024-11-20 12:56:37.970274] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.940 [2024-11-20 12:56:37.970278] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.940 [2024-11-20 12:56:37.979517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.940 qpair failed and we were unable to recover it. 
00:29:04.940 [2024-11-20 12:56:37.990452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.940 [2024-11-20 12:56:37.990480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.940 [2024-11-20 12:56:37.990490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.940 [2024-11-20 12:56:37.990494] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.940 [2024-11-20 12:56:37.990499] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.940 [2024-11-20 12:56:37.999458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-11-20 12:56:38.010087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.940 [2024-11-20 12:56:38.010114] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.940 [2024-11-20 12:56:38.010123] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.940 [2024-11-20 12:56:38.010128] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.940 [2024-11-20 12:56:38.010132] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.940 [2024-11-20 12:56:38.019683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-11-20 12:56:38.030268] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.940 [2024-11-20 12:56:38.030304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.940 [2024-11-20 12:56:38.030316] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.940 [2024-11-20 12:56:38.030321] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.940 [2024-11-20 12:56:38.030325] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:04.940 [2024-11-20 12:56:38.039748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.940 qpair failed and we were unable to recover it. 
00:29:05.224 [2024-11-20 12:56:38.050697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.224 [2024-11-20 12:56:38.050728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.224 [2024-11-20 12:56:38.050737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.224 [2024-11-20 12:56:38.050742] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.224 [2024-11-20 12:56:38.050747] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.224 [2024-11-20 12:56:38.059694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.224 qpair failed and we were unable to recover it. 00:29:05.224 [2024-11-20 12:56:38.070690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.224 [2024-11-20 12:56:38.070718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.224 [2024-11-20 12:56:38.070728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.224 [2024-11-20 12:56:38.070732] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.224 [2024-11-20 12:56:38.070737] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.224 [2024-11-20 12:56:38.079912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.224 qpair failed and we were unable to recover it. 00:29:05.224 [2024-11-20 12:56:38.090157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.224 [2024-11-20 12:56:38.090190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.224 [2024-11-20 12:56:38.090199] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.224 [2024-11-20 12:56:38.090204] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.224 [2024-11-20 12:56:38.090208] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.224 [2024-11-20 12:56:38.100300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.224 qpair failed and we were unable to recover it. 
00:29:05.224 [2024-11-20 12:56:38.110499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.224 [2024-11-20 12:56:38.110528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.224 [2024-11-20 12:56:38.110538] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.224 [2024-11-20 12:56:38.110542] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.224 [2024-11-20 12:56:38.110547] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.225 [2024-11-20 12:56:38.119848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-11-20 12:56:38.130541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.225 [2024-11-20 12:56:38.130569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.225 [2024-11-20 12:56:38.130579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.225 [2024-11-20 12:56:38.130584] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.225 [2024-11-20 12:56:38.130588] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.225 [2024-11-20 12:56:38.139922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-11-20 12:56:38.150642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.225 [2024-11-20 12:56:38.150679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.225 [2024-11-20 12:56:38.150689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.225 [2024-11-20 12:56:38.150693] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.225 [2024-11-20 12:56:38.150698] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.225 [2024-11-20 12:56:38.160294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.225 qpair failed and we were unable to recover it. 
00:29:05.225 [2024-11-20 12:56:38.170629] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.225 [2024-11-20 12:56:38.170657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.225 [2024-11-20 12:56:38.170667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.225 [2024-11-20 12:56:38.170672] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.225 [2024-11-20 12:56:38.170676] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.225 [2024-11-20 12:56:38.179914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-11-20 12:56:38.191056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.225 [2024-11-20 12:56:38.191095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.225 [2024-11-20 12:56:38.191114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.225 [2024-11-20 12:56:38.191121] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.225 [2024-11-20 12:56:38.191125] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.225 [2024-11-20 12:56:38.200277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-11-20 12:56:38.210856] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.225 [2024-11-20 12:56:38.210890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.225 [2024-11-20 12:56:38.210904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.225 [2024-11-20 12:56:38.210909] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.225 [2024-11-20 12:56:38.210913] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.225 [2024-11-20 12:56:38.220202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.225 qpair failed and we were unable to recover it. 
00:29:05.225 [2024-11-20 12:56:38.230527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.225 [2024-11-20 12:56:38.230558] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.225 [2024-11-20 12:56:38.230568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.225 [2024-11-20 12:56:38.230573] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.225 [2024-11-20 12:56:38.230577] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.225 [2024-11-20 12:56:38.240282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-11-20 12:56:38.250650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.225 [2024-11-20 12:56:38.250679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.225 [2024-11-20 12:56:38.250689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.225 [2024-11-20 12:56:38.250694] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.225 [2024-11-20 12:56:38.250698] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.225 [2024-11-20 12:56:38.260207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-11-20 12:56:38.271053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.225 [2024-11-20 12:56:38.271091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.225 [2024-11-20 12:56:38.271111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.225 [2024-11-20 12:56:38.271117] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.225 [2024-11-20 12:56:38.271122] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.225 [2024-11-20 12:56:38.280633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.225 qpair failed and we were unable to recover it. 
00:29:05.225 [2024-11-20 12:56:38.290995] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.225 [2024-11-20 12:56:38.291021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.225 [2024-11-20 12:56:38.291032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.225 [2024-11-20 12:56:38.291037] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.225 [2024-11-20 12:56:38.291045] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.225 [2024-11-20 12:56:38.300562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-11-20 12:56:38.310548] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.225 [2024-11-20 12:56:38.310577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.225 [2024-11-20 12:56:38.310587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.225 [2024-11-20 12:56:38.310592] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.225 [2024-11-20 12:56:38.310596] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.225 [2024-11-20 12:56:38.320400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.504 [2024-11-20 12:56:38.330850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.504 [2024-11-20 12:56:38.330879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.504 [2024-11-20 12:56:38.330889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.504 [2024-11-20 12:56:38.330894] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.504 [2024-11-20 12:56:38.330898] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.504 [2024-11-20 12:56:38.340677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.504 qpair failed and we were unable to recover it. 
00:29:05.504 [2024-11-20 12:56:38.351420] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.504 [2024-11-20 12:56:38.351452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.504 [2024-11-20 12:56:38.351461] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.504 [2024-11-20 12:56:38.351466] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.504 [2024-11-20 12:56:38.351471] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.504 [2024-11-20 12:56:38.360685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.504 qpair failed and we were unable to recover it. 00:29:05.504 [2024-11-20 12:56:38.371382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.504 [2024-11-20 12:56:38.371417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.504 [2024-11-20 12:56:38.371426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.504 [2024-11-20 12:56:38.371431] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.504 [2024-11-20 12:56:38.371435] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.504 [2024-11-20 12:56:38.380523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.504 qpair failed and we were unable to recover it. 00:29:05.504 [2024-11-20 12:56:38.391026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.504 [2024-11-20 12:56:38.391057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.504 [2024-11-20 12:56:38.391066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.504 [2024-11-20 12:56:38.391071] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.504 [2024-11-20 12:56:38.391076] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.504 [2024-11-20 12:56:38.400773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.504 qpair failed and we were unable to recover it. 
00:29:05.504 [2024-11-20 12:56:38.410699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.504 [2024-11-20 12:56:38.410730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.504 [2024-11-20 12:56:38.410739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.504 [2024-11-20 12:56:38.410744] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.504 [2024-11-20 12:56:38.410748] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.504 [2024-11-20 12:56:38.421029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.504 qpair failed and we were unable to recover it. 00:29:05.504 [2024-11-20 12:56:38.431359] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.504 [2024-11-20 12:56:38.431391] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.504 [2024-11-20 12:56:38.431401] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.504 [2024-11-20 12:56:38.431405] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.504 [2024-11-20 12:56:38.431410] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.504 [2024-11-20 12:56:38.440964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.504 qpair failed and we were unable to recover it. 00:29:05.504 [2024-11-20 12:56:38.450740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.504 [2024-11-20 12:56:38.450771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.504 [2024-11-20 12:56:38.450780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.504 [2024-11-20 12:56:38.450785] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.504 [2024-11-20 12:56:38.450789] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.504 [2024-11-20 12:56:38.460916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.504 qpair failed and we were unable to recover it. 
00:29:05.504 [2024-11-20 12:56:38.471636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.504 [2024-11-20 12:56:38.471667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.504 [2024-11-20 12:56:38.471676] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.504 [2024-11-20 12:56:38.471687] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.504 [2024-11-20 12:56:38.471691] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.504 [2024-11-20 12:56:38.480845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.504 qpair failed and we were unable to recover it. 00:29:05.504 [2024-11-20 12:56:38.491413] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.504 [2024-11-20 12:56:38.491441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.504 [2024-11-20 12:56:38.491451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.504 [2024-11-20 12:56:38.491455] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.504 [2024-11-20 12:56:38.491460] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.504 [2024-11-20 12:56:38.501250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.504 qpair failed and we were unable to recover it. 00:29:05.504 [2024-11-20 12:56:38.511783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.504 [2024-11-20 12:56:38.511816] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.504 [2024-11-20 12:56:38.511825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.504 [2024-11-20 12:56:38.511830] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.504 [2024-11-20 12:56:38.511835] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.504 [2024-11-20 12:56:38.520947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.504 qpair failed and we were unable to recover it. 
00:29:05.504 [2024-11-20 12:56:38.531556] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.504 [2024-11-20 12:56:38.531591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.504 [2024-11-20 12:56:38.531600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.504 [2024-11-20 12:56:38.531605] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.504 [2024-11-20 12:56:38.531609] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.504 [2024-11-20 12:56:38.541522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.504 qpair failed and we were unable to recover it. 00:29:05.504 [2024-11-20 12:56:38.551780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.504 [2024-11-20 12:56:38.551806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.504 [2024-11-20 12:56:38.551815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.504 [2024-11-20 12:56:38.551820] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.504 [2024-11-20 12:56:38.551825] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.504 [2024-11-20 12:56:38.561346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.505 qpair failed and we were unable to recover it. 00:29:05.505 [2024-11-20 12:56:38.571555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.505 [2024-11-20 12:56:38.571585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.505 [2024-11-20 12:56:38.571595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.505 [2024-11-20 12:56:38.571599] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.505 [2024-11-20 12:56:38.571603] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.505 [2024-11-20 12:56:38.581210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.505 qpair failed and we were unable to recover it. 
00:29:05.505 [2024-11-20 12:56:38.591846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.505 [2024-11-20 12:56:38.591884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.505 [2024-11-20 12:56:38.591893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.505 [2024-11-20 12:56:38.591898] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.505 [2024-11-20 12:56:38.591902] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.505 [2024-11-20 12:56:38.601559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.505 qpair failed and we were unable to recover it. 00:29:05.780 [2024-11-20 12:56:38.612090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.780 [2024-11-20 12:56:38.612116] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.780 [2024-11-20 12:56:38.612126] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.780 [2024-11-20 12:56:38.612130] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.780 [2024-11-20 12:56:38.612135] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.780 [2024-11-20 12:56:38.621536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.780 qpair failed and we were unable to recover it. 00:29:05.780 [2024-11-20 12:56:38.632244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.780 [2024-11-20 12:56:38.632274] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.780 [2024-11-20 12:56:38.632283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.780 [2024-11-20 12:56:38.632288] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.780 [2024-11-20 12:56:38.632292] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.780 [2024-11-20 12:56:38.641272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.780 qpair failed and we were unable to recover it. 
00:29:05.780 [2024-11-20 12:56:38.651767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.780 [2024-11-20 12:56:38.651795] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.780 [2024-11-20 12:56:38.651807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.780 [2024-11-20 12:56:38.651812] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.780 [2024-11-20 12:56:38.651816] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.780 [2024-11-20 12:56:38.661432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.780 qpair failed and we were unable to recover it. 00:29:05.780 [2024-11-20 12:56:38.672228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.780 [2024-11-20 12:56:38.672260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.780 [2024-11-20 12:56:38.672270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.780 [2024-11-20 12:56:38.672275] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.780 [2024-11-20 12:56:38.672279] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.780 [2024-11-20 12:56:38.681423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.780 qpair failed and we were unable to recover it. 00:29:05.780 [2024-11-20 12:56:38.691664] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.780 [2024-11-20 12:56:38.691691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.780 [2024-11-20 12:56:38.691700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.780 [2024-11-20 12:56:38.691705] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.780 [2024-11-20 12:56:38.691709] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.780 [2024-11-20 12:56:38.701964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.780 qpair failed and we were unable to recover it. 
00:29:05.780 [2024-11-20 12:56:38.712468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.780 [2024-11-20 12:56:38.712505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.780 [2024-11-20 12:56:38.712514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.780 [2024-11-20 12:56:38.712519] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.780 [2024-11-20 12:56:38.712523] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.780 [2024-11-20 12:56:38.721793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.780 qpair failed and we were unable to recover it. 00:29:05.780 [2024-11-20 12:56:38.731686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.780 [2024-11-20 12:56:38.731716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.780 [2024-11-20 12:56:38.731725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.780 [2024-11-20 12:56:38.731730] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.780 [2024-11-20 12:56:38.731737] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.780 [2024-11-20 12:56:38.741849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.780 qpair failed and we were unable to recover it. 00:29:05.781 [2024-11-20 12:56:38.752377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.781 [2024-11-20 12:56:38.752410] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.781 [2024-11-20 12:56:38.752420] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.781 [2024-11-20 12:56:38.752424] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.781 [2024-11-20 12:56:38.752428] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.781 [2024-11-20 12:56:38.761743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.781 qpair failed and we were unable to recover it. 
00:29:05.781 [2024-11-20 12:56:38.772444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.781 [2024-11-20 12:56:38.772474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.781 [2024-11-20 12:56:38.772493] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.781 [2024-11-20 12:56:38.772499] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.781 [2024-11-20 12:56:38.772504] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.781 [2024-11-20 12:56:38.781809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.781 qpair failed and we were unable to recover it. 00:29:05.781 [2024-11-20 12:56:38.792671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.781 [2024-11-20 12:56:38.792702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.781 [2024-11-20 12:56:38.792713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.781 [2024-11-20 12:56:38.792718] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.781 [2024-11-20 12:56:38.792722] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.781 [2024-11-20 12:56:38.802199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.781 qpair failed and we were unable to recover it. 00:29:05.781 [2024-11-20 12:56:38.812422] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.781 [2024-11-20 12:56:38.812452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.781 [2024-11-20 12:56:38.812471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.781 [2024-11-20 12:56:38.812477] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.781 [2024-11-20 12:56:38.812482] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.781 [2024-11-20 12:56:38.821934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.781 qpair failed and we were unable to recover it. 
00:29:05.781 [2024-11-20 12:56:38.832718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.781 [2024-11-20 12:56:38.832749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.781 [2024-11-20 12:56:38.832761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.781 [2024-11-20 12:56:38.832766] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.781 [2024-11-20 12:56:38.832770] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.781 [2024-11-20 12:56:38.842030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.781 qpair failed and we were unable to recover it. 00:29:05.781 [2024-11-20 12:56:38.852869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.781 [2024-11-20 12:56:38.852899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.781 [2024-11-20 12:56:38.852909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.781 [2024-11-20 12:56:38.852914] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.781 [2024-11-20 12:56:38.852918] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:05.781 [2024-11-20 12:56:38.862222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.781 qpair failed and we were unable to recover it. 00:29:05.781 [2024-11-20 12:56:38.872872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.781 [2024-11-20 12:56:38.872897] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.781 [2024-11-20 12:56:38.872907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.781 [2024-11-20 12:56:38.872912] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.781 [2024-11-20 12:56:38.872916] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.069 [2024-11-20 12:56:38.881827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.069 qpair failed and we were unable to recover it. 
00:29:06.069 [2024-11-20 12:56:38.892596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.069 [2024-11-20 12:56:38.892624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.069 [2024-11-20 12:56:38.892633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.069 [2024-11-20 12:56:38.892638] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.069 [2024-11-20 12:56:38.892642] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.069 [2024-11-20 12:56:38.902408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.069 qpair failed and we were unable to recover it. 00:29:06.069 [2024-11-20 12:56:38.913204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.069 [2024-11-20 12:56:38.913238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.069 [2024-11-20 12:56:38.913248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.069 [2024-11-20 12:56:38.913256] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.069 [2024-11-20 12:56:38.913260] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.069 [2024-11-20 12:56:38.922022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.069 qpair failed and we were unable to recover it. 00:29:06.069 [2024-11-20 12:56:38.932627] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.069 [2024-11-20 12:56:38.932657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.069 [2024-11-20 12:56:38.932667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.069 [2024-11-20 12:56:38.932671] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.069 [2024-11-20 12:56:38.932676] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.069 [2024-11-20 12:56:38.942272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.069 qpair failed and we were unable to recover it. 
00:29:06.069 [2024-11-20 12:56:38.953232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.069 [2024-11-20 12:56:38.953264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.069 [2024-11-20 12:56:38.953273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.069 [2024-11-20 12:56:38.953278] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.069 [2024-11-20 12:56:38.953282] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.069 [2024-11-20 12:56:38.962563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.069 qpair failed and we were unable to recover it. 00:29:06.069 [2024-11-20 12:56:38.972829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.069 [2024-11-20 12:56:38.972858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.069 [2024-11-20 12:56:38.972867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.069 [2024-11-20 12:56:38.972872] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.069 [2024-11-20 12:56:38.972876] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.069 [2024-11-20 12:56:38.982360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.069 qpair failed and we were unable to recover it. 00:29:06.069 [2024-11-20 12:56:38.993229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.069 [2024-11-20 12:56:38.993264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.069 [2024-11-20 12:56:38.993274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.069 [2024-11-20 12:56:38.993278] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.069 [2024-11-20 12:56:38.993283] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.069 [2024-11-20 12:56:39.002527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.069 qpair failed and we were unable to recover it. 
00:29:06.069 [2024-11-20 12:56:39.013466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.069 [2024-11-20 12:56:39.013498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.069 [2024-11-20 12:56:39.013517] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.069 [2024-11-20 12:56:39.013523] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.069 [2024-11-20 12:56:39.013528] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.069 [2024-11-20 12:56:39.022703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.069 qpair failed and we were unable to recover it. 00:29:06.069 [2024-11-20 12:56:39.033081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.069 [2024-11-20 12:56:39.033112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.069 [2024-11-20 12:56:39.033123] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.069 [2024-11-20 12:56:39.033128] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.069 [2024-11-20 12:56:39.033132] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.069 [2024-11-20 12:56:39.042840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.069 qpair failed and we were unable to recover it. 00:29:06.069 [2024-11-20 12:56:39.052962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.069 [2024-11-20 12:56:39.052988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.069 [2024-11-20 12:56:39.052999] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.069 [2024-11-20 12:56:39.053003] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.069 [2024-11-20 12:56:39.053008] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.069 [2024-11-20 12:56:39.062655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.069 qpair failed and we were unable to recover it. 
00:29:06.069 [2024-11-20 12:56:39.073403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.069 [2024-11-20 12:56:39.073438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.069 [2024-11-20 12:56:39.073447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.069 [2024-11-20 12:56:39.073452] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.069 [2024-11-20 12:56:39.073457] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.069 [2024-11-20 12:56:39.082642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.069 qpair failed and we were unable to recover it. 00:29:06.069 [2024-11-20 12:56:39.093671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.069 [2024-11-20 12:56:39.093699] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.069 [2024-11-20 12:56:39.093722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.069 [2024-11-20 12:56:39.093727] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.069 [2024-11-20 12:56:39.093732] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.069 [2024-11-20 12:56:39.103075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.069 qpair failed and we were unable to recover it. 00:29:06.069 [2024-11-20 12:56:39.113897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.069 [2024-11-20 12:56:39.113926] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.069 [2024-11-20 12:56:39.113945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.069 [2024-11-20 12:56:39.113951] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.069 [2024-11-20 12:56:39.113956] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.070 [2024-11-20 12:56:39.122922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.070 qpair failed and we were unable to recover it. 
00:29:06.070 [2024-11-20 12:56:39.133029] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.070 [2024-11-20 12:56:39.133056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.070 [2024-11-20 12:56:39.133067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.070 [2024-11-20 12:56:39.133072] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.070 [2024-11-20 12:56:39.133077] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.070 [2024-11-20 12:56:39.142889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.070 qpair failed and we were unable to recover it. 00:29:06.070 [2024-11-20 12:56:39.153679] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.070 [2024-11-20 12:56:39.153714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.070 [2024-11-20 12:56:39.153724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.070 [2024-11-20 12:56:39.153729] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.070 [2024-11-20 12:56:39.153733] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.070 [2024-11-20 12:56:39.163035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.070 qpair failed and we were unable to recover it. 00:29:06.369 [2024-11-20 12:56:39.173714] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.369 [2024-11-20 12:56:39.173739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.369 [2024-11-20 12:56:39.173749] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.369 [2024-11-20 12:56:39.173754] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.369 [2024-11-20 12:56:39.173762] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.369 [2024-11-20 12:56:39.183474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.369 qpair failed and we were unable to recover it. 
00:29:06.369 [2024-11-20 12:56:39.193404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.369 [2024-11-20 12:56:39.193427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.369 [2024-11-20 12:56:39.193436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.369 [2024-11-20 12:56:39.193441] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.369 [2024-11-20 12:56:39.193445] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.369 [2024-11-20 12:56:39.203084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.369 qpair failed and we were unable to recover it. 00:29:06.369 [2024-11-20 12:56:39.213450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.369 [2024-11-20 12:56:39.213479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.369 [2024-11-20 12:56:39.213488] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.369 [2024-11-20 12:56:39.213493] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.369 [2024-11-20 12:56:39.213497] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.369 [2024-11-20 12:56:39.223182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.369 qpair failed and we were unable to recover it. 00:29:06.369 [2024-11-20 12:56:39.233127] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.369 [2024-11-20 12:56:39.233156] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.369 [2024-11-20 12:56:39.233166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.369 [2024-11-20 12:56:39.233170] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.369 [2024-11-20 12:56:39.233175] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.369 [2024-11-20 12:56:39.243020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.369 qpair failed and we were unable to recover it. 
00:29:06.369 [2024-11-20 12:56:39.253877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.369 [2024-11-20 12:56:39.253908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.370 [2024-11-20 12:56:39.253917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.370 [2024-11-20 12:56:39.253922] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.370 [2024-11-20 12:56:39.253927] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.370 [2024-11-20 12:56:39.263211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.370 qpair failed and we were unable to recover it. 00:29:06.370 [2024-11-20 12:56:39.274127] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.370 [2024-11-20 12:56:39.274157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.370 [2024-11-20 12:56:39.274167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.370 [2024-11-20 12:56:39.274172] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.370 [2024-11-20 12:56:39.274176] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.370 [2024-11-20 12:56:39.283268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.370 qpair failed and we were unable to recover it. 00:29:06.370 [2024-11-20 12:56:39.293647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.370 [2024-11-20 12:56:39.293676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.370 [2024-11-20 12:56:39.293695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.370 [2024-11-20 12:56:39.293701] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.370 [2024-11-20 12:56:39.293706] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.370 [2024-11-20 12:56:39.303417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.370 qpair failed and we were unable to recover it. 
00:29:06.370 [2024-11-20 12:56:39.314217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.370 [2024-11-20 12:56:39.314248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.370 [2024-11-20 12:56:39.314259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.370 [2024-11-20 12:56:39.314264] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.370 [2024-11-20 12:56:39.314268] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.370 [2024-11-20 12:56:39.323296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.370 qpair failed and we were unable to recover it. 00:29:06.370 [2024-11-20 12:56:39.334257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.370 [2024-11-20 12:56:39.334281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.370 [2024-11-20 12:56:39.334291] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.370 [2024-11-20 12:56:39.334296] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.370 [2024-11-20 12:56:39.334301] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.370 [2024-11-20 12:56:39.343559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.370 qpair failed and we were unable to recover it. 00:29:06.370 [2024-11-20 12:56:39.354208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.370 [2024-11-20 12:56:39.354236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.370 [2024-11-20 12:56:39.354245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.370 [2024-11-20 12:56:39.354253] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.370 [2024-11-20 12:56:39.354258] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.370 [2024-11-20 12:56:39.363576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.370 qpair failed and we were unable to recover it. 
00:29:06.370 [2024-11-20 12:56:39.373893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.370 [2024-11-20 12:56:39.373922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.370 [2024-11-20 12:56:39.373931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.370 [2024-11-20 12:56:39.373936] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.370 [2024-11-20 12:56:39.373940] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.370 [2024-11-20 12:56:39.383700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.370 qpair failed and we were unable to recover it. 00:29:06.370 [2024-11-20 12:56:39.394507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.370 [2024-11-20 12:56:39.394548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.370 [2024-11-20 12:56:39.394567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.370 [2024-11-20 12:56:39.394573] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.370 [2024-11-20 12:56:39.394578] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.370 [2024-11-20 12:56:39.403465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.370 qpair failed and we were unable to recover it. 00:29:06.370 [2024-11-20 12:56:39.414339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.370 [2024-11-20 12:56:39.414372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.370 [2024-11-20 12:56:39.414383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.370 [2024-11-20 12:56:39.414388] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.370 [2024-11-20 12:56:39.414393] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.370 [2024-11-20 12:56:39.423433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.370 qpair failed and we were unable to recover it. 
00:29:06.370 [2024-11-20 12:56:39.433859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.370 [2024-11-20 12:56:39.433889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.370 [2024-11-20 12:56:39.433909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.370 [2024-11-20 12:56:39.433915] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.370 [2024-11-20 12:56:39.433920] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.370 [2024-11-20 12:56:39.443756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.370 qpair failed and we were unable to recover it. 00:29:06.370 [2024-11-20 12:56:39.454259] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.370 [2024-11-20 12:56:39.454289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.370 [2024-11-20 12:56:39.454309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.370 [2024-11-20 12:56:39.454315] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.370 [2024-11-20 12:56:39.454320] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.370 [2024-11-20 12:56:39.463731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.370 qpair failed and we were unable to recover it. 00:29:06.656 [2024-11-20 12:56:39.474585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.656 [2024-11-20 12:56:39.474613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.656 [2024-11-20 12:56:39.474624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.656 [2024-11-20 12:56:39.474629] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.656 [2024-11-20 12:56:39.474634] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.656 [2024-11-20 12:56:39.483798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.656 qpair failed and we were unable to recover it. 
00:29:06.656 [2024-11-20 12:56:39.494866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.656 [2024-11-20 12:56:39.494898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.656 [2024-11-20 12:56:39.494918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.656 [2024-11-20 12:56:39.494924] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.656 [2024-11-20 12:56:39.494929] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.656 [2024-11-20 12:56:39.503832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.656 qpair failed and we were unable to recover it. 00:29:06.656 [2024-11-20 12:56:39.514387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.656 [2024-11-20 12:56:39.514417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.656 [2024-11-20 12:56:39.514436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.656 [2024-11-20 12:56:39.514442] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.656 [2024-11-20 12:56:39.514447] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.656 [2024-11-20 12:56:39.523734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.656 qpair failed and we were unable to recover it. 00:29:06.656 [2024-11-20 12:56:39.533988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.656 [2024-11-20 12:56:39.534018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.656 [2024-11-20 12:56:39.534032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.656 [2024-11-20 12:56:39.534037] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.656 [2024-11-20 12:56:39.534042] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.656 [2024-11-20 12:56:39.544234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.656 qpair failed and we were unable to recover it. 
00:29:06.656 [2024-11-20 12:56:39.554767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.656 [2024-11-20 12:56:39.554796] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.656 [2024-11-20 12:56:39.554815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.656 [2024-11-20 12:56:39.554821] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.656 [2024-11-20 12:56:39.554826] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.656 [2024-11-20 12:56:39.564202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.656 qpair failed and we were unable to recover it. 00:29:06.656 [2024-11-20 12:56:39.574187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.656 [2024-11-20 12:56:39.574217] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.656 [2024-11-20 12:56:39.574227] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.656 [2024-11-20 12:56:39.574232] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.656 [2024-11-20 12:56:39.574237] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.657 [2024-11-20 12:56:39.583902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.657 qpair failed and we were unable to recover it. 00:29:06.657 [2024-11-20 12:56:39.595022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.657 [2024-11-20 12:56:39.595056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.657 [2024-11-20 12:56:39.595066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.657 [2024-11-20 12:56:39.595071] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.657 [2024-11-20 12:56:39.595076] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.657 [2024-11-20 12:56:39.604257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.657 qpair failed and we were unable to recover it. 
00:29:06.657 [2024-11-20 12:56:39.614602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.657 [2024-11-20 12:56:39.614630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.657 [2024-11-20 12:56:39.614639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.657 [2024-11-20 12:56:39.614644] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.657 [2024-11-20 12:56:39.614649] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.657 [2024-11-20 12:56:39.624526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.657 qpair failed and we were unable to recover it. 00:29:06.657 [2024-11-20 12:56:39.635016] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.657 [2024-11-20 12:56:39.635044] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.657 [2024-11-20 12:56:39.635054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.657 [2024-11-20 12:56:39.635059] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.657 [2024-11-20 12:56:39.635063] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.657 [2024-11-20 12:56:39.644279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.657 qpair failed and we were unable to recover it. 00:29:06.657 [2024-11-20 12:56:39.655134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.657 [2024-11-20 12:56:39.655162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.657 [2024-11-20 12:56:39.655171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.657 [2024-11-20 12:56:39.655176] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.657 [2024-11-20 12:56:39.655181] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.657 [2024-11-20 12:56:39.664528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.657 qpair failed and we were unable to recover it. 
00:29:06.657 [2024-11-20 12:56:39.675550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.657 [2024-11-20 12:56:39.675577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.657 [2024-11-20 12:56:39.675596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.657 [2024-11-20 12:56:39.675602] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.657 [2024-11-20 12:56:39.675607] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.657 [2024-11-20 12:56:39.684311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.657 qpair failed and we were unable to recover it. 00:29:06.657 [2024-11-20 12:56:39.694795] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.657 [2024-11-20 12:56:39.694823] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.657 [2024-11-20 12:56:39.694833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.657 [2024-11-20 12:56:39.694838] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.657 [2024-11-20 12:56:39.694842] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.657 [2024-11-20 12:56:39.704277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.657 qpair failed and we were unable to recover it. 00:29:06.657 [2024-11-20 12:56:39.715263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.657 [2024-11-20 12:56:39.715300] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.657 [2024-11-20 12:56:39.715310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.657 [2024-11-20 12:56:39.715314] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.657 [2024-11-20 12:56:39.715319] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.657 [2024-11-20 12:56:39.724794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.657 qpair failed and we were unable to recover it. 
00:29:06.657 [2024-11-20 12:56:39.735205] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.657 [2024-11-20 12:56:39.735231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.657 [2024-11-20 12:56:39.735240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.657 [2024-11-20 12:56:39.735245] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.657 [2024-11-20 12:56:39.735250] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.657 [2024-11-20 12:56:39.744616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.657 qpair failed and we were unable to recover it. 00:29:06.657 [2024-11-20 12:56:39.755373] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.657 [2024-11-20 12:56:39.755402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.657 [2024-11-20 12:56:39.755411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.657 [2024-11-20 12:56:39.755416] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.657 [2024-11-20 12:56:39.755420] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.941 [2024-11-20 12:56:39.764796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.941 qpair failed and we were unable to recover it. 00:29:06.941 [2024-11-20 12:56:39.774971] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.941 [2024-11-20 12:56:39.775004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.941 [2024-11-20 12:56:39.775013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.941 [2024-11-20 12:56:39.775018] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.941 [2024-11-20 12:56:39.775023] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.941 [2024-11-20 12:56:39.784656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.941 qpair failed and we were unable to recover it. 
00:29:06.941 [2024-11-20 12:56:39.795384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.941 [2024-11-20 12:56:39.795412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.941 [2024-11-20 12:56:39.795421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.941 [2024-11-20 12:56:39.795426] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.941 [2024-11-20 12:56:39.795434] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.941 [2024-11-20 12:56:39.804425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.941 qpair failed and we were unable to recover it. 00:29:06.941 [2024-11-20 12:56:39.815401] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.941 [2024-11-20 12:56:39.815426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.941 [2024-11-20 12:56:39.815436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.941 [2024-11-20 12:56:39.815441] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.941 [2024-11-20 12:56:39.815445] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.941 [2024-11-20 12:56:39.825195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.941 qpair failed and we were unable to recover it. 00:29:06.941 [2024-11-20 12:56:39.835388] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.941 [2024-11-20 12:56:39.835416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.941 [2024-11-20 12:56:39.835425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.941 [2024-11-20 12:56:39.835430] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.941 [2024-11-20 12:56:39.835435] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.941 [2024-11-20 12:56:39.844786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.941 qpair failed and we were unable to recover it. 
00:29:06.941 [2024-11-20 12:56:39.855192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.941 [2024-11-20 12:56:39.855220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.941 [2024-11-20 12:56:39.855229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.941 [2024-11-20 12:56:39.855234] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.941 [2024-11-20 12:56:39.855238] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.941 [2024-11-20 12:56:39.864945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.941 qpair failed and we were unable to recover it. 00:29:06.941 [2024-11-20 12:56:39.875519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.941 [2024-11-20 12:56:39.875556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.941 [2024-11-20 12:56:39.875565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.941 [2024-11-20 12:56:39.875570] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.941 [2024-11-20 12:56:39.875574] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.941 [2024-11-20 12:56:39.884936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.941 qpair failed and we were unable to recover it. 00:29:06.941 [2024-11-20 12:56:39.895655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.941 [2024-11-20 12:56:39.895686] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.941 [2024-11-20 12:56:39.895695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.941 [2024-11-20 12:56:39.895700] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.941 [2024-11-20 12:56:39.895705] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.941 [2024-11-20 12:56:39.905108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.941 qpair failed and we were unable to recover it. 
00:29:06.941 [2024-11-20 12:56:39.915811] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.941 [2024-11-20 12:56:39.915838] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.942 [2024-11-20 12:56:39.915848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.942 [2024-11-20 12:56:39.915852] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.942 [2024-11-20 12:56:39.915857] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.942 [2024-11-20 12:56:39.925098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.942 qpair failed and we were unable to recover it. 00:29:06.942 [2024-11-20 12:56:39.935228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.942 [2024-11-20 12:56:39.935258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.942 [2024-11-20 12:56:39.935268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.942 [2024-11-20 12:56:39.935272] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.942 [2024-11-20 12:56:39.935277] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.942 [2024-11-20 12:56:39.945106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.942 qpair failed and we were unable to recover it. 00:29:06.942 [2024-11-20 12:56:39.955506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.942 [2024-11-20 12:56:39.955536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.942 [2024-11-20 12:56:39.955546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.942 [2024-11-20 12:56:39.955550] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.942 [2024-11-20 12:56:39.955554] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.942 [2024-11-20 12:56:39.964960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.942 qpair failed and we were unable to recover it. 
00:29:06.942 [2024-11-20 12:56:39.975847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.942 [2024-11-20 12:56:39.975881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.942 [2024-11-20 12:56:39.975896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.942 [2024-11-20 12:56:39.975901] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.942 [2024-11-20 12:56:39.975905] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.942 [2024-11-20 12:56:39.985114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.942 qpair failed and we were unable to recover it. 00:29:06.942 [2024-11-20 12:56:39.995233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.942 [2024-11-20 12:56:39.995264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.942 [2024-11-20 12:56:39.995273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.942 [2024-11-20 12:56:39.995278] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.942 [2024-11-20 12:56:39.995283] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.942 [2024-11-20 12:56:40.005098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.942 qpair failed and we were unable to recover it. 00:29:06.942 [2024-11-20 12:56:40.015100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.942 [2024-11-20 12:56:40.015131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.942 [2024-11-20 12:56:40.015147] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.942 [2024-11-20 12:56:40.015153] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.942 [2024-11-20 12:56:40.015158] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:06.942 [2024-11-20 12:56:40.025625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:06.942 qpair failed and we were unable to recover it. 
00:29:07.228 [2024-11-20 12:56:40.036022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.228 [2024-11-20 12:56:40.036059] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.228 [2024-11-20 12:56:40.036071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.228 [2024-11-20 12:56:40.036077] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.228 [2024-11-20 12:56:40.036082] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.228 [2024-11-20 12:56:40.045363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.228 qpair failed and we were unable to recover it. 00:29:07.228 [2024-11-20 12:56:40.056022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.228 [2024-11-20 12:56:40.056049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.228 [2024-11-20 12:56:40.056059] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.228 [2024-11-20 12:56:40.056065] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.228 [2024-11-20 12:56:40.056070] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.228 [2024-11-20 12:56:40.065513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.228 qpair failed and we were unable to recover it. 00:29:07.228 [2024-11-20 12:56:40.076098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.228 [2024-11-20 12:56:40.076129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.228 [2024-11-20 12:56:40.076139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.228 [2024-11-20 12:56:40.076145] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.228 [2024-11-20 12:56:40.076150] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.228 [2024-11-20 12:56:40.085396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.228 qpair failed and we were unable to recover it. 
00:29:07.228 [2024-11-20 12:56:40.095600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.228 [2024-11-20 12:56:40.095628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.228 [2024-11-20 12:56:40.095638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.228 [2024-11-20 12:56:40.095643] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.228 [2024-11-20 12:56:40.095648] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.228 [2024-11-20 12:56:40.105598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.228 qpair failed and we were unable to recover it. 00:29:07.228 [2024-11-20 12:56:40.116408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.228 [2024-11-20 12:56:40.116446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.228 [2024-11-20 12:56:40.116458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.228 [2024-11-20 12:56:40.116463] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.228 [2024-11-20 12:56:40.116468] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.228 [2024-11-20 12:56:40.125502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.228 qpair failed and we were unable to recover it. 00:29:07.228 [2024-11-20 12:56:40.136147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.228 [2024-11-20 12:56:40.136175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.228 [2024-11-20 12:56:40.136185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.228 [2024-11-20 12:56:40.136190] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.228 [2024-11-20 12:56:40.136195] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.228 [2024-11-20 12:56:40.145745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.228 qpair failed and we were unable to recover it. 
00:29:07.228 [2024-11-20 12:56:40.155512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.228 [2024-11-20 12:56:40.155545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.228 [2024-11-20 12:56:40.155558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.228 [2024-11-20 12:56:40.155563] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.228 [2024-11-20 12:56:40.155568] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.228 [2024-11-20 12:56:40.165490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.228 qpair failed and we were unable to recover it. 00:29:07.228 [2024-11-20 12:56:40.175859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.228 [2024-11-20 12:56:40.175888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.228 [2024-11-20 12:56:40.175898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.228 [2024-11-20 12:56:40.175903] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.228 [2024-11-20 12:56:40.175907] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.228 [2024-11-20 12:56:40.186076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.228 qpair failed and we were unable to recover it. 00:29:07.228 [2024-11-20 12:56:40.196088] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.228 [2024-11-20 12:56:40.196122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.228 [2024-11-20 12:56:40.196131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.228 [2024-11-20 12:56:40.196136] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.228 [2024-11-20 12:56:40.196141] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.228 [2024-11-20 12:56:40.206019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.228 qpair failed and we were unable to recover it. 
00:29:07.228 [2024-11-20 12:56:40.216233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.228 [2024-11-20 12:56:40.216265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.228 [2024-11-20 12:56:40.216275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.228 [2024-11-20 12:56:40.216280] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.229 [2024-11-20 12:56:40.216284] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.229 [2024-11-20 12:56:40.226030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.229 qpair failed and we were unable to recover it. 00:29:07.229 [2024-11-20 12:56:40.236765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.229 [2024-11-20 12:56:40.236795] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.229 [2024-11-20 12:56:40.236805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.229 [2024-11-20 12:56:40.236810] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.229 [2024-11-20 12:56:40.236817] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.229 [2024-11-20 12:56:40.245857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.229 qpair failed and we were unable to recover it. 00:29:07.229 [2024-11-20 12:56:40.256033] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.229 [2024-11-20 12:56:40.256060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.229 [2024-11-20 12:56:40.256069] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.229 [2024-11-20 12:56:40.256074] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.229 [2024-11-20 12:56:40.256079] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.229 [2024-11-20 12:56:40.266286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.229 qpair failed and we were unable to recover it. 
00:29:07.229 [2024-11-20 12:56:40.276614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.229 [2024-11-20 12:56:40.276645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.229 [2024-11-20 12:56:40.276655] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.229 [2024-11-20 12:56:40.276660] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.229 [2024-11-20 12:56:40.276664] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.229 [2024-11-20 12:56:40.285984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.229 qpair failed and we were unable to recover it. 00:29:07.229 [2024-11-20 12:56:40.296893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.229 [2024-11-20 12:56:40.296926] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.229 [2024-11-20 12:56:40.296936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.229 [2024-11-20 12:56:40.296941] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.229 [2024-11-20 12:56:40.296945] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.229 [2024-11-20 12:56:40.306281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.229 qpair failed and we were unable to recover it. 00:29:07.229 [2024-11-20 12:56:40.316389] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.229 [2024-11-20 12:56:40.316415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.229 [2024-11-20 12:56:40.316424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.229 [2024-11-20 12:56:40.316430] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.229 [2024-11-20 12:56:40.316434] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.538 [2024-11-20 12:56:40.326368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.538 qpair failed and we were unable to recover it. 
00:29:07.538 [2024-11-20 12:56:40.336623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.538 [2024-11-20 12:56:40.336653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.538 [2024-11-20 12:56:40.336663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.538 [2024-11-20 12:56:40.336667] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.538 [2024-11-20 12:56:40.336672] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.538 [2024-11-20 12:56:40.346493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.538 qpair failed and we were unable to recover it. 00:29:07.538 [2024-11-20 12:56:40.357336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.538 [2024-11-20 12:56:40.357371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.539 [2024-11-20 12:56:40.357380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.539 [2024-11-20 12:56:40.357385] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.539 [2024-11-20 12:56:40.357389] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.539 [2024-11-20 12:56:40.366189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-20 12:56:40.377079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.539 [2024-11-20 12:56:40.377110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.539 [2024-11-20 12:56:40.377119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.539 [2024-11-20 12:56:40.377124] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.539 [2024-11-20 12:56:40.377128] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.539 [2024-11-20 12:56:40.386444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.539 qpair failed and we were unable to recover it. 
00:29:07.539 [2024-11-20 12:56:40.397243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.539 [2024-11-20 12:56:40.397270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.539 [2024-11-20 12:56:40.397279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.539 [2024-11-20 12:56:40.397285] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.539 [2024-11-20 12:56:40.397289] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.539 [2024-11-20 12:56:40.406607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-20 12:56:40.416830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.539 [2024-11-20 12:56:40.416859] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.539 [2024-11-20 12:56:40.416868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.539 [2024-11-20 12:56:40.416876] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.539 [2024-11-20 12:56:40.416880] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.539 [2024-11-20 12:56:40.426257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-20 12:56:40.436704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.539 [2024-11-20 12:56:40.436742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.539 [2024-11-20 12:56:40.436752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.539 [2024-11-20 12:56:40.436757] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.539 [2024-11-20 12:56:40.436761] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.539 [2024-11-20 12:56:40.446720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.539 qpair failed and we were unable to recover it. 
00:29:07.539 [2024-11-20 12:56:40.457437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.539 [2024-11-20 12:56:40.457467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.539 [2024-11-20 12:56:40.457476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.539 [2024-11-20 12:56:40.457481] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.539 [2024-11-20 12:56:40.457485] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.539 [2024-11-20 12:56:40.467079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-20 12:56:40.477552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.539 [2024-11-20 12:56:40.477581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.539 [2024-11-20 12:56:40.477591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.539 [2024-11-20 12:56:40.477596] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.539 [2024-11-20 12:56:40.477600] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.539 [2024-11-20 12:56:40.486368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-20 12:56:40.496554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.539 [2024-11-20 12:56:40.496583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.539 [2024-11-20 12:56:40.496593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.539 [2024-11-20 12:56:40.496598] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.539 [2024-11-20 12:56:40.496602] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.539 [2024-11-20 12:56:40.506790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.539 qpair failed and we were unable to recover it. 
00:29:07.539 [2024-11-20 12:56:40.517625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.539 [2024-11-20 12:56:40.517660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.539 [2024-11-20 12:56:40.517670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.539 [2024-11-20 12:56:40.517674] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.539 [2024-11-20 12:56:40.517679] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.539 [2024-11-20 12:56:40.526946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-20 12:56:40.537709] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.539 [2024-11-20 12:56:40.537739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.539 [2024-11-20 12:56:40.537748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.539 [2024-11-20 12:56:40.537753] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.539 [2024-11-20 12:56:40.537757] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.539 [2024-11-20 12:56:40.546710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-20 12:56:40.557793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.539 [2024-11-20 12:56:40.557822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.539 [2024-11-20 12:56:40.557842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.539 [2024-11-20 12:56:40.557848] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.539 [2024-11-20 12:56:40.557852] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.539 [2024-11-20 12:56:40.566683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.539 qpair failed and we were unable to recover it. 
00:29:07.539 [2024-11-20 12:56:40.576810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.540 [2024-11-20 12:56:40.576840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.540 [2024-11-20 12:56:40.576851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.540 [2024-11-20 12:56:40.576857] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.540 [2024-11-20 12:56:40.576862] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.540 [2024-11-20 12:56:40.586856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.540 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-20 12:56:40.597662] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.540 [2024-11-20 12:56:40.597694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.540 [2024-11-20 12:56:40.597707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.540 [2024-11-20 12:56:40.597712] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.540 [2024-11-20 12:56:40.597717] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.540 [2024-11-20 12:56:40.606794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.540 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-20 12:56:40.617697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.540 [2024-11-20 12:56:40.617731] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.540 [2024-11-20 12:56:40.617741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.540 [2024-11-20 12:56:40.617746] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.540 [2024-11-20 12:56:40.617751] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.540 [2024-11-20 12:56:40.627186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.540 qpair failed and we were unable to recover it. 
00:29:07.843 [2024-11-20 12:56:40.637771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.843 [2024-11-20 12:56:40.637802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.843 [2024-11-20 12:56:40.637811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.843 [2024-11-20 12:56:40.637816] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.843 [2024-11-20 12:56:40.637820] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.844 [2024-11-20 12:56:40.646901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.844 qpair failed and we were unable to recover it. 00:29:07.844 [2024-11-20 12:56:40.656752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.844 [2024-11-20 12:56:40.656781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.844 [2024-11-20 12:56:40.656790] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.844 [2024-11-20 12:56:40.656797] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.844 [2024-11-20 12:56:40.656801] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.844 [2024-11-20 12:56:40.667185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.844 qpair failed and we were unable to recover it. 00:29:07.844 [2024-11-20 12:56:40.677653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.844 [2024-11-20 12:56:40.677683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.844 [2024-11-20 12:56:40.677692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.844 [2024-11-20 12:56:40.677697] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.844 [2024-11-20 12:56:40.677704] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.844 [2024-11-20 12:56:40.687158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.844 qpair failed and we were unable to recover it. 
00:29:07.844 [2024-11-20 12:56:40.697665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.844 [2024-11-20 12:56:40.697700] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.844 [2024-11-20 12:56:40.697710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.844 [2024-11-20 12:56:40.697715] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.844 [2024-11-20 12:56:40.697719] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.844 [2024-11-20 12:56:40.707187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.844 qpair failed and we were unable to recover it. 00:29:07.844 [2024-11-20 12:56:40.717833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.844 [2024-11-20 12:56:40.717865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.844 [2024-11-20 12:56:40.717885] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.844 [2024-11-20 12:56:40.717891] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.844 [2024-11-20 12:56:40.717896] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.844 [2024-11-20 12:56:40.727236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.844 qpair failed and we were unable to recover it. 00:29:07.844 [2024-11-20 12:56:40.737275] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.844 [2024-11-20 12:56:40.737305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.844 [2024-11-20 12:56:40.737315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.844 [2024-11-20 12:56:40.737320] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.844 [2024-11-20 12:56:40.737325] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.844 [2024-11-20 12:56:40.747338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.844 qpair failed and we were unable to recover it. 
00:29:07.844 [2024-11-20 12:56:40.757870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.844 [2024-11-20 12:56:40.757903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.844 [2024-11-20 12:56:40.757913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.844 [2024-11-20 12:56:40.757918] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.844 [2024-11-20 12:56:40.757922] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.844 [2024-11-20 12:56:40.767198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.844 qpair failed and we were unable to recover it. 00:29:07.844 [2024-11-20 12:56:40.778050] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.844 [2024-11-20 12:56:40.778084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.844 [2024-11-20 12:56:40.778103] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.844 [2024-11-20 12:56:40.778109] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.844 [2024-11-20 12:56:40.778114] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.844 [2024-11-20 12:56:40.787541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.844 qpair failed and we were unable to recover it. 00:29:07.844 [2024-11-20 12:56:40.798032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.844 [2024-11-20 12:56:40.798069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.844 [2024-11-20 12:56:40.798080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.844 [2024-11-20 12:56:40.798085] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.844 [2024-11-20 12:56:40.798089] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.844 [2024-11-20 12:56:40.807386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.844 qpair failed and we were unable to recover it. 
00:29:07.844 [2024-11-20 12:56:40.817628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.844 [2024-11-20 12:56:40.817658] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.844 [2024-11-20 12:56:40.817668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.844 [2024-11-20 12:56:40.817672] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.844 [2024-11-20 12:56:40.817677] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.844 [2024-11-20 12:56:40.827549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.844 qpair failed and we were unable to recover it. 00:29:07.844 [2024-11-20 12:56:40.838365] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.844 [2024-11-20 12:56:40.838398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.844 [2024-11-20 12:56:40.838417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.844 [2024-11-20 12:56:40.838423] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.844 [2024-11-20 12:56:40.838429] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.844 [2024-11-20 12:56:40.847384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.844 qpair failed and we were unable to recover it. 00:29:07.844 [2024-11-20 12:56:40.858006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.845 [2024-11-20 12:56:40.858038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.845 [2024-11-20 12:56:40.858048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.845 [2024-11-20 12:56:40.858057] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.845 [2024-11-20 12:56:40.858061] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.845 [2024-11-20 12:56:40.867572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.845 qpair failed and we were unable to recover it. 
00:29:07.845 [2024-11-20 12:56:40.878374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.845 [2024-11-20 12:56:40.878407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.845 [2024-11-20 12:56:40.878427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.845 [2024-11-20 12:56:40.878433] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.845 [2024-11-20 12:56:40.878438] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.845 [2024-11-20 12:56:40.887865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.845 qpair failed and we were unable to recover it. 00:29:07.845 [2024-11-20 12:56:40.897866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.845 [2024-11-20 12:56:40.897893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.845 [2024-11-20 12:56:40.897904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.845 [2024-11-20 12:56:40.897909] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.845 [2024-11-20 12:56:40.897913] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.845 [2024-11-20 12:56:40.907823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.845 qpair failed and we were unable to recover it. 00:29:07.845 [2024-11-20 12:56:40.918496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.845 [2024-11-20 12:56:40.918530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.845 [2024-11-20 12:56:40.918550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.845 [2024-11-20 12:56:40.918556] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.845 [2024-11-20 12:56:40.918560] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.845 [2024-11-20 12:56:40.927910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.845 qpair failed and we were unable to recover it. 
00:29:08.116 [2024-11-20 12:56:40.938422] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-11-20 12:56:40.938453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-11-20 12:56:40.938464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-11-20 12:56:40.938469] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-11-20 12:56:40.938474] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:08.117 [2024-11-20 12:56:40.948105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.117 qpair failed and we were unable to recover it. 00:29:08.117 [2024-11-20 12:56:40.958409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-11-20 12:56:40.958436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-11-20 12:56:40.958446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-11-20 12:56:40.958451] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-11-20 12:56:40.958456] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:08.117 [2024-11-20 12:56:40.967913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.117 qpair failed and we were unable to recover it. 00:29:08.117 [2024-11-20 12:56:40.978048] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-11-20 12:56:40.978078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-11-20 12:56:40.978087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-11-20 12:56:40.978092] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-11-20 12:56:40.978097] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:08.117 [2024-11-20 12:56:40.988021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.117 qpair failed and we were unable to recover it. 
00:29:08.117 [2024-11-20 12:56:40.998668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-11-20 12:56:40.998702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-11-20 12:56:40.998711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-11-20 12:56:40.998716] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-11-20 12:56:40.998721] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:08.117 [2024-11-20 12:56:41.007849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.117 qpair failed and we were unable to recover it. 00:29:08.117 [2024-11-20 12:56:41.017926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-11-20 12:56:41.017954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-11-20 12:56:41.017963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-11-20 12:56:41.017968] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-11-20 12:56:41.017972] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:08.117 [2024-11-20 12:56:41.028443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.117 qpair failed and we were unable to recover it. 00:29:08.117 [2024-11-20 12:56:41.038718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-11-20 12:56:41.038753] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-11-20 12:56:41.038764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-11-20 12:56:41.038769] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-11-20 12:56:41.038774] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:08.117 [2024-11-20 12:56:41.048324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.117 qpair failed and we were unable to recover it. 
00:29:08.117 [2024-11-20 12:56:41.058461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-11-20 12:56:41.058495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-11-20 12:56:41.058504] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-11-20 12:56:41.058509] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-11-20 12:56:41.058514] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:08.117 [2024-11-20 12:56:41.068321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.117 qpair failed and we were unable to recover it. 00:29:08.117 [2024-11-20 12:56:41.078959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-11-20 12:56:41.078998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-11-20 12:56:41.079007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-11-20 12:56:41.079012] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-11-20 12:56:41.079016] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:08.117 [2024-11-20 12:56:41.088359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.117 qpair failed and we were unable to recover it. 00:29:08.117 [2024-11-20 12:56:41.098703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-11-20 12:56:41.098735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-11-20 12:56:41.098744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-11-20 12:56:41.098749] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-11-20 12:56:41.098753] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:08.117 [2024-11-20 12:56:41.108765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.117 qpair failed and we were unable to recover it. 
00:29:08.117 [2024-11-20 12:56:41.118841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-11-20 12:56:41.118869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-11-20 12:56:41.118878] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.118 [2024-11-20 12:56:41.118883] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.118 [2024-11-20 12:56:41.118890] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:08.118 [2024-11-20 12:56:41.128429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.118 qpair failed and we were unable to recover it. 00:29:08.118 [2024-11-20 12:56:41.138461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.118 [2024-11-20 12:56:41.138488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.118 [2024-11-20 12:56:41.138497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.118 [2024-11-20 12:56:41.138502] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.118 [2024-11-20 12:56:41.138506] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:08.118 [2024-11-20 12:56:41.148382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.118 qpair failed and we were unable to recover it. 
00:29:09.061 Read completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Write completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Write completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Write completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Write completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Read completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Read completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Write completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Write completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Write completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Read completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Read completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Write completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Read completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Write completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Read completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Read completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Read completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Write completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Write completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Read completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Write completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Write completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Read completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Write completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Read completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Read completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Write completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Write completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Read completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Write completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 Read completed with error (sct=0, sc=8) 00:29:09.061 starting I/O failed 00:29:09.061 [2024-11-20 12:56:42.154258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.061 [2024-11-20 12:56:42.161960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.061 [2024-11-20 12:56:42.162003] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.061 [2024-11-20 12:56:42.162021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.061 [2024-11-20 12:56:42.162029] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll 
NVMe-oF Fabric CONNECT command 00:29:09.061 [2024-11-20 12:56:42.162037] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002bd1c0 00:29:09.321 [2024-11-20 12:56:42.171489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.321 qpair failed and we were unable to recover it. 00:29:09.321 [2024-11-20 12:56:42.182316] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.321 [2024-11-20 12:56:42.182354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.321 [2024-11-20 12:56:42.182369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.321 [2024-11-20 12:56:42.182376] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.321 [2024-11-20 12:56:42.182383] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002bd1c0 00:29:09.321 [2024-11-20 12:56:42.191602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.321 qpair failed and we were unable to recover it. 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Write completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Write completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Write completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Write completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Write completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Write completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Write completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Write completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Write completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Write completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 
00:29:10.265 starting I/O failed 00:29:10.265 Write completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Write completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Read completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 Write completed with error (sct=0, sc=8) 00:29:10.265 starting I/O failed 00:29:10.265 [2024-11-20 12:56:43.197325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.265 [2024-11-20 12:56:43.204549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.265 [2024-11-20 12:56:43.204581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.265 [2024-11-20 12:56:43.204600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.265 [2024-11-20 12:56:43.204608] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.265 [2024-11-20 12:56:43.204615] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:10.265 [2024-11-20 12:56:43.214725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.265 qpair failed and we were unable to recover it. 00:29:10.265 [2024-11-20 12:56:43.224744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.265 [2024-11-20 12:56:43.224776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.265 [2024-11-20 12:56:43.224791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.265 [2024-11-20 12:56:43.224798] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.265 [2024-11-20 12:56:43.224805] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:10.265 [2024-11-20 12:56:43.234514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.265 qpair failed and we were unable to recover it. 00:29:10.265 [2024-11-20 12:56:43.234664] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:10.265 A controller has encountered a failure and is being reset. 00:29:10.265 [2024-11-20 12:56:43.234829] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:10.265 [2024-11-20 12:56:43.273258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:10.265 Controller properly reset. 
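The runs of "completed with error (sct=0, sc=8)" above are a queue depth's worth (-q 32) of in-flight I/O being failed back when the qpair drops; sct 0 / sc 8 corresponds to the generic "Command Aborted due to SQ Deletion" status. A small helper for eyeballing the (sct, sc) pairs that appear in this log, offered as a convenience sketch only:

    # Map the (sct, sc) pairs printed by the reconnect example to spec names.
    decode_nvme_status() {
        local sct=$1 sc=$2
        case "$sct/$sc" in
            0/8)   echo "Generic: Command Aborted due to SQ Deletion" ;;
            1/130) echo "Command Specific: Connect Invalid Parameters (0x82)" ;;
            *)     echo "sct=$sct sc=$sc: see the NVMe base spec status code tables" ;;
        esac
    }
    decode_nvme_status 0 8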
00:29:10.265 Initializing NVMe Controllers 00:29:10.265 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:10.265 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:10.265 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:10.265 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:10.265 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:10.265 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:10.265 Initialization complete. Launching workers. 00:29:10.265 Starting thread on core 1 00:29:10.265 Starting thread on core 2 00:29:10.265 Starting thread on core 3 00:29:10.265 Starting thread on core 0 00:29:10.265 12:56:43 -- host/target_disconnect.sh@59 -- # sync 00:29:10.265 00:29:10.265 real 0m13.659s 00:29:10.265 user 0m27.637s 00:29:10.265 sys 0m2.362s 00:29:10.265 12:56:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:10.265 12:56:43 -- common/autotest_common.sh@10 -- # set +x 00:29:10.265 ************************************ 00:29:10.265 END TEST nvmf_target_disconnect_tc2 00:29:10.265 ************************************ 00:29:10.526 12:56:43 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']' 00:29:10.526 12:56:43 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:29:10.526 12:56:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:10.526 12:56:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:10.526 12:56:43 -- common/autotest_common.sh@10 -- # set +x 00:29:10.526 ************************************ 00:29:10.526 START TEST nvmf_target_disconnect_tc3 00:29:10.526 ************************************ 00:29:10.526 12:56:43 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc3 00:29:10.526 12:56:43 -- host/target_disconnect.sh@65 -- # reconnectpid=690884 00:29:10.526 12:56:43 -- host/target_disconnect.sh@67 -- # sleep 2 00:29:10.526 12:56:43 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:29:10.526 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.440 12:56:45 -- host/target_disconnect.sh@68 -- # kill -9 689272 00:29:12.440 12:56:45 -- host/target_disconnect.sh@70 -- # sleep 2 00:29:13.828 Write completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Write completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Write completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Write completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Write 
completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Write completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Write completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Write completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Read completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Write completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 Write completed with error (sct=0, sc=8) 00:29:13.829 starting I/O failed 00:29:13.829 [2024-11-20 12:56:46.584115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.401 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 689272 Killed "${NVMF_APP[@]}" "$@" 00:29:14.401 12:56:47 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:29:14.401 12:56:47 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:14.401 12:56:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:14.401 12:56:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:14.401 12:56:47 -- common/autotest_common.sh@10 -- # set +x 00:29:14.401 12:56:47 -- nvmf/common.sh@469 -- # nvmfpid=691766 00:29:14.401 12:56:47 -- nvmf/common.sh@470 -- # waitforlisten 691766 00:29:14.401 12:56:47 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:14.401 12:56:47 -- common/autotest_common.sh@829 -- # '[' -z 691766 ']' 00:29:14.401 12:56:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.401 12:56:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:14.401 12:56:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.401 12:56:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:14.401 12:56:47 -- common/autotest_common.sh@10 -- # set +x 00:29:14.401 [2024-11-20 12:56:47.466401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:29:14.401 [2024-11-20 12:56:47.466453] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.401 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.662 [2024-11-20 12:56:47.543876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:14.662 Write completed with error (sct=0, sc=8) 00:29:14.662 starting I/O failed 00:29:14.662 Write completed with error (sct=0, sc=8) 00:29:14.662 starting I/O failed 00:29:14.662 Read completed with error (sct=0, sc=8) 00:29:14.662 starting I/O failed 00:29:14.662 Read completed with error (sct=0, sc=8) 00:29:14.662 starting I/O failed 00:29:14.662 Read completed with error (sct=0, sc=8) 00:29:14.662 starting I/O failed 00:29:14.662 Write completed with error (sct=0, sc=8) 00:29:14.662 starting I/O failed 00:29:14.662 Write completed with error (sct=0, sc=8) 00:29:14.662 starting I/O failed 00:29:14.662 Write completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Read completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Write completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Write completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Write completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Read completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Read completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Read completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Read completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Write completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Write completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Write completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Read completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Write completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Read completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Write completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Write completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Write completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Write completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Write completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Read completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Read completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Write completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Read completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 Read completed with error (sct=0, sc=8) 00:29:14.663 starting I/O failed 00:29:14.663 [2024-11-20 12:56:47.589688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.663 [2024-11-20 12:56:47.592180] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel 
(status = 8) 00:29:14.663 [2024-11-20 12:56:47.592193] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:14.663 [2024-11-20 12:56:47.592204] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:14.663 [2024-11-20 12:56:47.597177] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:14.663 [2024-11-20 12:56:47.597271] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.663 [2024-11-20 12:56:47.597277] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.663 [2024-11-20 12:56:47.597282] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:14.663 [2024-11-20 12:56:47.597417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:14.663 [2024-11-20 12:56:47.597569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:14.663 [2024-11-20 12:56:47.597720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:14.663 [2024-11-20 12:56:47.597722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:15.234 12:56:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:15.234 12:56:48 -- common/autotest_common.sh@862 -- # return 0 00:29:15.234 12:56:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:15.234 12:56:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:15.234 12:56:48 -- common/autotest_common.sh@10 -- # set +x 00:29:15.234 12:56:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.234 12:56:48 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:15.234 12:56:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.234 12:56:48 -- common/autotest_common.sh@10 -- # set +x 00:29:15.234 Malloc0 00:29:15.234 12:56:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.234 12:56:48 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:15.234 12:56:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.234 12:56:48 -- common/autotest_common.sh@10 -- # set +x 00:29:15.494 [2024-11-20 12:56:48.343148] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1348b40/0x1354760) succeed. 00:29:15.494 [2024-11-20 12:56:48.354938] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x134a130/0x1395e00) succeed. 
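This target was launched with -e 0xFFFF, so all tracepoint groups are enabled, and the app_setup_trace notices above print the capture recipe. A sketch of following it on the test node while nvmf_tgt (shm instance -i 0) is still running; the spdk_trace path assumes a default in-tree SPDK build:

    # Snapshot the live tracepoints of the nvmf app, shm instance 0.
    ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
    # Or keep the raw shared-memory file for offline decoding later.
    cp /dev/shm/nvmf_trace.0 /tmp/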
00:29:15.494 12:56:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.494 12:56:48 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:15.494 12:56:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.495 12:56:48 -- common/autotest_common.sh@10 -- # set +x 00:29:15.495 12:56:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.495 12:56:48 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:15.495 12:56:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.495 12:56:48 -- common/autotest_common.sh@10 -- # set +x 00:29:15.495 12:56:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.495 12:56:48 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:29:15.495 12:56:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.495 12:56:48 -- common/autotest_common.sh@10 -- # set +x 00:29:15.495 [2024-11-20 12:56:48.486971] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:29:15.495 12:56:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.495 12:56:48 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:29:15.495 12:56:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.495 12:56:48 -- common/autotest_common.sh@10 -- # set +x 00:29:15.495 12:56:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.495 12:56:48 -- host/target_disconnect.sh@73 -- # wait 690884 00:29:15.495 [2024-11-20 12:56:48.596467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.495 qpair failed and we were unable to recover it. 00:29:15.495 [2024-11-20 12:56:48.598797] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:15.495 [2024-11-20 12:56:48.598808] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:15.495 [2024-11-20 12:56:48.598813] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:16.881 [2024-11-20 12:56:49.603235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.881 qpair failed and we were unable to recover it. 00:29:16.881 [2024-11-20 12:56:49.605909] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:16.881 [2024-11-20 12:56:49.605920] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:16.881 [2024-11-20 12:56:49.605926] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:17.823 [2024-11-20 12:56:50.610282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.823 qpair failed and we were unable to recover it. 
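The rpc_cmd calls traced above assemble the failover target piece by piece: a 64 MiB / 512 B-block malloc bdev, an RDMA transport with 1024 shared buffers, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace, and data plus discovery listeners on the alternate address 192.168.100.9 only. Outside the harness the same sequence would go through scripts/rpc.py against the running nvmf_tgt; a minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket and the SPDK checkout used by this job:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420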
00:29:17.823 [2024-11-20 12:56:50.612953] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:17.823 [2024-11-20 12:56:50.612964] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:17.823 [2024-11-20 12:56:50.612968] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:18.767 [2024-11-20 12:56:51.617219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.767 qpair failed and we were unable to recover it. 00:29:18.767 [2024-11-20 12:56:51.619538] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:18.767 [2024-11-20 12:56:51.619548] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:18.767 [2024-11-20 12:56:51.619553] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:19.710 [2024-11-20 12:56:52.623871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.710 qpair failed and we were unable to recover it. 00:29:19.710 [2024-11-20 12:56:52.626227] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:19.710 [2024-11-20 12:56:52.626240] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:19.710 [2024-11-20 12:56:52.626245] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:20.653 [2024-11-20 12:56:53.630284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.653 qpair failed and we were unable to recover it. 00:29:20.653 [2024-11-20 12:56:53.632410] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:20.653 [2024-11-20 12:56:53.632419] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:20.653 [2024-11-20 12:56:53.632424] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:21.595 [2024-11-20 12:56:54.636379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.595 qpair failed and we were unable to recover it. 00:29:21.595 [2024-11-20 12:56:54.639715] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:21.595 [2024-11-20 12:56:54.639782] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:21.595 [2024-11-20 12:56:54.639805] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:22.981 [2024-11-20 12:56:55.644055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:22.981 qpair failed and we were unable to recover it. 
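Each block above is one retry from the reconnect example: the CM rejects the connection (RDMA_CM_EVENT_REJECTED, connect error -74), the completion path reports CQ transport error -6, and about a second later the next attempt starts. The attempts keep failing because the original target application (pid 689272) was killed and its replacement only listens on 192.168.100.9, which the host has not failed over to yet. A hedged way to confirm what the new target is exporting while this loop runs (RPC names as in current SPDK; not part of the test):

    ./scripts/rpc.py nvmf_get_subsystems
    ./scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1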
00:29:22.981 [2024-11-20 12:56:55.646614] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:22.981 [2024-11-20 12:56:55.646630] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:22.981 [2024-11-20 12:56:55.646638] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:23.553 [2024-11-20 12:56:56.650901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:23.553 qpair failed and we were unable to recover it. 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Write completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Write completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Write completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Write completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Write completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Write completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Write completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Write completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Write completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Write completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Write completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Write completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Read completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Write completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 Write completed with error (sct=0, sc=8) 00:29:24.936 starting I/O failed 00:29:24.936 [2024-11-20 12:56:57.656656] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:24.936 [2024-11-20 12:56:57.659227] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:24.936 [2024-11-20 12:56:57.659243] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:24.936 [2024-11-20 12:56:57.659250] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002bd1c0 00:29:25.877 [2024-11-20 12:56:58.663614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:25.877 qpair failed and we were unable to recover it. 00:29:25.877 [2024-11-20 12:56:58.666056] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:25.877 [2024-11-20 12:56:58.666065] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:25.877 [2024-11-20 12:56:58.666070] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002bd1c0 00:29:26.818 [2024-11-20 12:56:59.670319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:26.818 qpair failed and we were unable to recover it. 00:29:26.818 [2024-11-20 12:56:59.670467] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:26.818 A controller has encountered a failure and is being reset. 00:29:26.818 Resorting to new failover address 192.168.100.9 00:29:26.818 [2024-11-20 12:56:59.670514] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.818 [2024-11-20 12:56:59.670540] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:26.818 [2024-11-20 12:56:59.672649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:26.818 Controller properly reset. 
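Here the failed keep-alive finally triggers failover: the host marks nqn.2016-06.io.spdk:cnode1 as failed, resorts to the alternate address 192.168.100.9, and the controller resets cleanly there. The alternate address was supplied up front on the reconnect command line, quoted below from the start of tc3; the alt_traddr key in the -r transport string is what gives the example its failover target:

    # tc3 invocation from host/target_disconnect.sh, as traced earlier in this log.
    build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'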
00:29:27.761 Read completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Write completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Read completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Read completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Read completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Write completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Write completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Write completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Write completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Read completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Write completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Read completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Write completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Read completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Read completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Write completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Write completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Write completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Read completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Read completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Write completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Write completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Write completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Read completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Read completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Write completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Read completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Write completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.761 Write completed with error (sct=0, sc=8) 00:29:27.761 starting I/O failed 00:29:27.762 Read completed with error (sct=0, sc=8) 00:29:27.762 starting I/O failed 00:29:27.762 Write completed with error (sct=0, sc=8) 00:29:27.762 starting I/O failed 00:29:27.762 Write completed with error (sct=0, sc=8) 00:29:27.762 starting I/O failed 00:29:27.762 [2024-11-20 12:57:00.714329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.762 Initializing NVMe Controllers 00:29:27.762 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:27.762 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:27.762 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:27.762 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:27.762 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:27.762 Associating RDMA (addr:192.168.100.8 
subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:27.762 Initialization complete. Launching workers. 00:29:27.762 Starting thread on core 1 00:29:27.762 Starting thread on core 2 00:29:27.762 Starting thread on core 3 00:29:27.762 Starting thread on core 0 00:29:27.762 12:57:00 -- host/target_disconnect.sh@74 -- # sync 00:29:27.762 00:29:27.762 real 0m17.366s 00:29:27.762 user 0m59.893s 00:29:27.762 sys 0m3.773s 00:29:27.762 12:57:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:27.762 12:57:00 -- common/autotest_common.sh@10 -- # set +x 00:29:27.762 ************************************ 00:29:27.762 END TEST nvmf_target_disconnect_tc3 00:29:27.762 ************************************ 00:29:27.762 12:57:00 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:27.762 12:57:00 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:29:27.762 12:57:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:27.762 12:57:00 -- nvmf/common.sh@116 -- # sync 00:29:27.762 12:57:00 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:27.762 12:57:00 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:27.762 12:57:00 -- nvmf/common.sh@119 -- # set +e 00:29:27.762 12:57:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:27.762 12:57:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:27.762 rmmod nvme_rdma 00:29:27.762 rmmod nvme_fabrics 00:29:27.762 12:57:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:27.762 12:57:00 -- nvmf/common.sh@123 -- # set -e 00:29:27.762 12:57:00 -- nvmf/common.sh@124 -- # return 0 00:29:27.762 12:57:00 -- nvmf/common.sh@477 -- # '[' -n 691766 ']' 00:29:27.762 12:57:00 -- nvmf/common.sh@478 -- # killprocess 691766 00:29:27.762 12:57:00 -- common/autotest_common.sh@936 -- # '[' -z 691766 ']' 00:29:27.762 12:57:00 -- common/autotest_common.sh@940 -- # kill -0 691766 00:29:27.762 12:57:00 -- common/autotest_common.sh@941 -- # uname 00:29:27.762 12:57:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:27.762 12:57:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 691766 00:29:28.023 12:57:00 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:29:28.023 12:57:00 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:29:28.023 12:57:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 691766' 00:29:28.023 killing process with pid 691766 00:29:28.023 12:57:00 -- common/autotest_common.sh@955 -- # kill 691766 00:29:28.023 12:57:00 -- common/autotest_common.sh@960 -- # wait 691766 00:29:28.023 12:57:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:28.023 12:57:01 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:28.023 00:29:28.023 real 0m39.938s 00:29:28.023 user 2m24.378s 00:29:28.023 sys 0m11.936s 00:29:28.023 12:57:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:28.023 12:57:01 -- common/autotest_common.sh@10 -- # set +x 00:29:28.023 ************************************ 00:29:28.023 END TEST nvmf_target_disconnect 00:29:28.023 ************************************ 00:29:28.285 12:57:01 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:29:28.285 12:57:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:28.285 12:57:01 -- common/autotest_common.sh@10 -- # set +x 00:29:28.285 12:57:01 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:29:28.285 00:29:28.285 real 22m4.267s 00:29:28.286 user 71m29.440s 00:29:28.286 sys 4m49.829s 00:29:28.286 12:57:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:28.286 12:57:01 -- 
common/autotest_common.sh@10 -- # set +x 00:29:28.286 ************************************ 00:29:28.286 END TEST nvmf_rdma 00:29:28.286 ************************************ 00:29:28.286 12:57:01 -- spdk/autotest.sh@280 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:29:28.286 12:57:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:28.286 12:57:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:28.286 12:57:01 -- common/autotest_common.sh@10 -- # set +x 00:29:28.286 ************************************ 00:29:28.286 START TEST spdkcli_nvmf_rdma 00:29:28.286 ************************************ 00:29:28.286 12:57:01 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:29:28.286 * Looking for test storage... 00:29:28.286 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:29:28.286 12:57:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:28.286 12:57:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:28.286 12:57:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:28.548 12:57:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:28.548 12:57:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:28.548 12:57:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:28.548 12:57:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:28.548 12:57:01 -- scripts/common.sh@335 -- # IFS=.-: 00:29:28.548 12:57:01 -- scripts/common.sh@335 -- # read -ra ver1 00:29:28.548 12:57:01 -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.548 12:57:01 -- scripts/common.sh@336 -- # read -ra ver2 00:29:28.548 12:57:01 -- scripts/common.sh@337 -- # local 'op=<' 00:29:28.548 12:57:01 -- scripts/common.sh@339 -- # ver1_l=2 00:29:28.548 12:57:01 -- scripts/common.sh@340 -- # ver2_l=1 00:29:28.548 12:57:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:28.548 12:57:01 -- scripts/common.sh@343 -- # case "$op" in 00:29:28.548 12:57:01 -- scripts/common.sh@344 -- # : 1 00:29:28.548 12:57:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:28.548 12:57:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:28.548 12:57:01 -- scripts/common.sh@364 -- # decimal 1 00:29:28.548 12:57:01 -- scripts/common.sh@352 -- # local d=1 00:29:28.548 12:57:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.548 12:57:01 -- scripts/common.sh@354 -- # echo 1 00:29:28.548 12:57:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:28.548 12:57:01 -- scripts/common.sh@365 -- # decimal 2 00:29:28.548 12:57:01 -- scripts/common.sh@352 -- # local d=2 00:29:28.548 12:57:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.548 12:57:01 -- scripts/common.sh@354 -- # echo 2 00:29:28.548 12:57:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:28.548 12:57:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:28.548 12:57:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:28.548 12:57:01 -- scripts/common.sh@367 -- # return 0 00:29:28.548 12:57:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.548 12:57:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:28.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.548 --rc genhtml_branch_coverage=1 00:29:28.548 --rc genhtml_function_coverage=1 00:29:28.548 --rc genhtml_legend=1 00:29:28.548 --rc geninfo_all_blocks=1 00:29:28.548 --rc geninfo_unexecuted_blocks=1 00:29:28.548 00:29:28.548 ' 00:29:28.548 12:57:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:28.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.548 --rc genhtml_branch_coverage=1 00:29:28.548 --rc genhtml_function_coverage=1 00:29:28.548 --rc genhtml_legend=1 00:29:28.548 --rc geninfo_all_blocks=1 00:29:28.548 --rc geninfo_unexecuted_blocks=1 00:29:28.548 00:29:28.548 ' 00:29:28.548 12:57:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:28.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.548 --rc genhtml_branch_coverage=1 00:29:28.548 --rc genhtml_function_coverage=1 00:29:28.548 --rc genhtml_legend=1 00:29:28.548 --rc geninfo_all_blocks=1 00:29:28.548 --rc geninfo_unexecuted_blocks=1 00:29:28.548 00:29:28.548 ' 00:29:28.548 12:57:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:28.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.548 --rc genhtml_branch_coverage=1 00:29:28.548 --rc genhtml_function_coverage=1 00:29:28.548 --rc genhtml_legend=1 00:29:28.548 --rc geninfo_all_blocks=1 00:29:28.548 --rc geninfo_unexecuted_blocks=1 00:29:28.548 00:29:28.548 ' 00:29:28.548 12:57:01 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:29:28.548 12:57:01 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:28.549 12:57:01 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:29:28.549 12:57:01 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.549 12:57:01 -- nvmf/common.sh@7 -- # uname -s 00:29:28.549 12:57:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.549 12:57:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.549 12:57:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.549 12:57:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.549 12:57:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.549 12:57:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:29:28.549 12:57:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.549 12:57:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.549 12:57:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.549 12:57:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.549 12:57:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:28.549 12:57:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:28.549 12:57:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.549 12:57:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.549 12:57:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.549 12:57:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:28.549 12:57:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.549 12:57:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.549 12:57:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.549 12:57:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.549 12:57:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.549 12:57:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.549 12:57:01 -- paths/export.sh@5 -- # export PATH 00:29:28.549 12:57:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.549 12:57:01 -- nvmf/common.sh@46 -- # : 0 00:29:28.549 12:57:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:28.549 12:57:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:28.549 12:57:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:28.549 12:57:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.549 12:57:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.549 12:57:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:28.549 12:57:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 
00:29:28.549 12:57:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:28.549 12:57:01 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:28.549 12:57:01 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:28.549 12:57:01 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:28.549 12:57:01 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:28.549 12:57:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:28.549 12:57:01 -- common/autotest_common.sh@10 -- # set +x 00:29:28.549 12:57:01 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:28.549 12:57:01 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=694634 00:29:28.549 12:57:01 -- spdkcli/common.sh@34 -- # waitforlisten 694634 00:29:28.549 12:57:01 -- common/autotest_common.sh@829 -- # '[' -z 694634 ']' 00:29:28.549 12:57:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.549 12:57:01 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:28.549 12:57:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:28.549 12:57:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.549 12:57:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:28.549 12:57:01 -- common/autotest_common.sh@10 -- # set +x 00:29:28.549 [2024-11-20 12:57:01.517899] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:28.549 [2024-11-20 12:57:01.517974] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid694634 ] 00:29:28.549 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.549 [2024-11-20 12:57:01.580783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:28.549 [2024-11-20 12:57:01.647028] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:28.549 [2024-11-20 12:57:01.647299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.549 [2024-11-20 12:57:01.647389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.493 12:57:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:29.493 12:57:02 -- common/autotest_common.sh@862 -- # return 0 00:29:29.493 12:57:02 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:29.493 12:57:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:29.493 12:57:02 -- common/autotest_common.sh@10 -- # set +x 00:29:29.493 12:57:02 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:29.493 12:57:02 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:29:29.493 12:57:02 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:29:29.493 12:57:02 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:29:29.493 12:57:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.493 12:57:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:29.493 12:57:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:29.493 12:57:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:29.493 12:57:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.493 12:57:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:29.493 12:57:02 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:29:29.493 12:57:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:29.493 12:57:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:29.493 12:57:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:29.493 12:57:02 -- common/autotest_common.sh@10 -- # set +x 00:29:36.084 12:57:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:36.084 12:57:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:36.084 12:57:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:36.084 12:57:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:36.084 12:57:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:36.084 12:57:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:36.084 12:57:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:36.084 12:57:09 -- nvmf/common.sh@294 -- # net_devs=() 00:29:36.084 12:57:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:36.084 12:57:09 -- nvmf/common.sh@295 -- # e810=() 00:29:36.084 12:57:09 -- nvmf/common.sh@295 -- # local -ga e810 00:29:36.084 12:57:09 -- nvmf/common.sh@296 -- # x722=() 00:29:36.084 12:57:09 -- nvmf/common.sh@296 -- # local -ga x722 00:29:36.084 12:57:09 -- nvmf/common.sh@297 -- # mlx=() 00:29:36.084 12:57:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:36.084 12:57:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.084 12:57:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.084 12:57:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.084 12:57:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.084 12:57:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.084 12:57:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.084 12:57:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.084 12:57:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.084 12:57:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.084 12:57:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.084 12:57:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.084 12:57:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:36.084 12:57:09 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:29:36.084 12:57:09 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:29:36.084 12:57:09 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:29:36.084 12:57:09 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:29:36.084 12:57:09 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:29:36.084 12:57:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:36.084 12:57:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:36.084 12:57:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:29:36.084 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:29:36.084 12:57:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:36.084 12:57:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:36.084 12:57:09 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:36.084 12:57:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:36.084 12:57:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:36.084 12:57:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:36.084 12:57:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:36.084 12:57:09 
-- nvmf/common.sh@340 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:29:36.084 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:29:36.084 12:57:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:36.084 12:57:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:36.084 12:57:09 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:36.084 12:57:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:36.084 12:57:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:36.084 12:57:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:36.084 12:57:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:36.084 12:57:09 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:29:36.084 12:57:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:36.084 12:57:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.085 12:57:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:36.085 12:57:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.085 12:57:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:29:36.085 Found net devices under 0000:98:00.0: mlx_0_0 00:29:36.085 12:57:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.085 12:57:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:36.085 12:57:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.085 12:57:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:36.085 12:57:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.085 12:57:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:29:36.085 Found net devices under 0000:98:00.1: mlx_0_1 00:29:36.085 12:57:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.085 12:57:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:36.085 12:57:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:36.085 12:57:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:36.085 12:57:09 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:29:36.085 12:57:09 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:29:36.085 12:57:09 -- nvmf/common.sh@408 -- # rdma_device_init 00:29:36.085 12:57:09 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:29:36.085 12:57:09 -- nvmf/common.sh@57 -- # uname 00:29:36.085 12:57:09 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:29:36.085 12:57:09 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:29:36.085 12:57:09 -- nvmf/common.sh@62 -- # modprobe ib_core 00:29:36.085 12:57:09 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:29:36.085 12:57:09 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:29:36.085 12:57:09 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:29:36.085 12:57:09 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:29:36.346 12:57:09 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:29:36.346 12:57:09 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:29:36.346 12:57:09 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:36.346 12:57:09 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:29:36.346 12:57:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:36.346 12:57:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:36.346 12:57:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:36.346 12:57:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:36.346 12:57:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:36.346 12:57:09 -- nvmf/common.sh@100 
-- # for net_dev in "${net_devs[@]}" 00:29:36.346 12:57:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.346 12:57:09 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:36.346 12:57:09 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:36.346 12:57:09 -- nvmf/common.sh@104 -- # continue 2 00:29:36.346 12:57:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:36.346 12:57:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.346 12:57:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:36.346 12:57:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.346 12:57:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:36.346 12:57:09 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:36.346 12:57:09 -- nvmf/common.sh@104 -- # continue 2 00:29:36.346 12:57:09 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:36.346 12:57:09 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:29:36.346 12:57:09 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:36.346 12:57:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:36.346 12:57:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:36.346 12:57:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:36.346 12:57:09 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:29:36.346 12:57:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:29:36.346 12:57:09 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:29:36.346 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:36.346 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:29:36.346 altname enp152s0f0np0 00:29:36.346 altname ens817f0np0 00:29:36.346 inet 192.168.100.8/24 scope global mlx_0_0 00:29:36.346 valid_lft forever preferred_lft forever 00:29:36.346 12:57:09 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:36.346 12:57:09 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:29:36.346 12:57:09 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:36.346 12:57:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:36.346 12:57:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:36.346 12:57:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:36.346 12:57:09 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:29:36.346 12:57:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:29:36.346 12:57:09 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:29:36.346 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:36.346 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:29:36.346 altname enp152s0f1np1 00:29:36.346 altname ens817f1np1 00:29:36.346 inet 192.168.100.9/24 scope global mlx_0_1 00:29:36.346 valid_lft forever preferred_lft forever 00:29:36.346 12:57:09 -- nvmf/common.sh@410 -- # return 0 00:29:36.346 12:57:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:36.346 12:57:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:36.346 12:57:09 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:29:36.346 12:57:09 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:29:36.346 12:57:09 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:29:36.346 12:57:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:36.346 12:57:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:36.346 12:57:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:36.346 12:57:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:36.346 12:57:09 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:36.346 12:57:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:36.346 12:57:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.346 12:57:09 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:36.346 12:57:09 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:36.346 12:57:09 -- nvmf/common.sh@104 -- # continue 2 00:29:36.346 12:57:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:36.346 12:57:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.346 12:57:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:36.347 12:57:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.347 12:57:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:36.347 12:57:09 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:36.347 12:57:09 -- nvmf/common.sh@104 -- # continue 2 00:29:36.347 12:57:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:36.347 12:57:09 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:29:36.347 12:57:09 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:36.347 12:57:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:36.347 12:57:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:36.347 12:57:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:36.347 12:57:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:36.347 12:57:09 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:29:36.347 12:57:09 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:36.347 12:57:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:36.347 12:57:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:36.347 12:57:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:36.347 12:57:09 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:29:36.347 192.168.100.9' 00:29:36.347 12:57:09 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:29:36.347 192.168.100.9' 00:29:36.347 12:57:09 -- nvmf/common.sh@445 -- # head -n 1 00:29:36.347 12:57:09 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:36.347 12:57:09 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:29:36.347 192.168.100.9' 00:29:36.347 12:57:09 -- nvmf/common.sh@446 -- # tail -n +2 00:29:36.347 12:57:09 -- nvmf/common.sh@446 -- # head -n 1 00:29:36.347 12:57:09 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:36.347 12:57:09 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:29:36.347 12:57:09 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:36.347 12:57:09 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:29:36.347 12:57:09 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:29:36.347 12:57:09 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:29:36.347 12:57:09 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:29:36.347 12:57:09 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:36.347 12:57:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:36.347 12:57:09 -- common/autotest_common.sh@10 -- # set +x 00:29:36.347 12:57:09 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:36.347 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:36.347 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:36.347 '\''/bdevs/malloc create 32 512 Malloc4'\'' 
'\''Malloc4'\'' True 00:29:36.347 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:36.347 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:36.347 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:36.347 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:36.347 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:36.347 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:29:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:36.347 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:36.347 ' 00:29:36.918 [2024-11-20 12:57:09.769058] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:38.832 [2024-11-20 12:57:11.842634] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2509b60/0x250c2b0) succeed. 
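The address gathering traced above reduces to one shell pipeline per RDMA netdev plus a head/tail split of the resulting list. A minimal standalone sketch of that pattern follows; the interface names come from the trace, while the helper shape below is illustrative rather than the test suite's own function:

    # print the first IPv4 address configured on an interface
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # one address per Mellanox netdev found earlier, then split into first/second target IPs
    RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

With the two interfaces configured as 192.168.100.8/24 and 192.168.100.9/24, this yields the 192.168.100.8 and 192.168.100.9 values the spdkcli commands use as listen addresses.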
00:29:38.832 [2024-11-20 12:57:11.857349] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x250b240/0x254d950) succeed. 00:29:40.218 [2024-11-20 12:57:13.087466] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:29:42.766 [2024-11-20 12:57:15.249939] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:29:44.151 [2024-11-20 12:57:17.107838] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:29:45.538 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:45.538 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:45.538 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:45.538 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:45.538 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:45.538 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:45.538 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:45.538 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:45.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:45.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:45.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:45.538 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:45.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:45.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:45.538 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:45.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:45.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:45.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:29:45.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:45.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:45.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:45.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:45.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 
192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:29:45.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:29:45.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:45.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:45.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:45.538 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:45.799 12:57:18 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:45.799 12:57:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:45.799 12:57:18 -- common/autotest_common.sh@10 -- # set +x 00:29:45.799 12:57:18 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:45.799 12:57:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:45.799 12:57:18 -- common/autotest_common.sh@10 -- # set +x 00:29:45.799 12:57:18 -- spdkcli/nvmf.sh@69 -- # check_match 00:29:45.800 12:57:18 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:46.061 12:57:19 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:46.061 12:57:19 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:46.061 12:57:19 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:46.061 12:57:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:46.061 12:57:19 -- common/autotest_common.sh@10 -- # set +x 00:29:46.061 12:57:19 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:46.061 12:57:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:46.061 12:57:19 -- common/autotest_common.sh@10 -- # set +x 00:29:46.061 12:57:19 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:46.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:46.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:46.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:46.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:29:46.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:29:46.061 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:46.061 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:46.061 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:46.061 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:46.061 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:46.061 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:46.061 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:46.061 '\''/bdevs/malloc 
delete Malloc1'\'' '\''Malloc1'\'' 00:29:46.061 ' 00:29:51.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:51.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:51.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:51.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:51.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:29:51.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:29:51.367 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:51.367 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:51.367 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:51.367 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:51.367 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:51.367 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:51.367 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:51.367 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:51.367 12:57:24 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:51.367 12:57:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:51.367 12:57:24 -- common/autotest_common.sh@10 -- # set +x 00:29:51.367 12:57:24 -- spdkcli/nvmf.sh@90 -- # killprocess 694634 00:29:51.367 12:57:24 -- common/autotest_common.sh@936 -- # '[' -z 694634 ']' 00:29:51.367 12:57:24 -- common/autotest_common.sh@940 -- # kill -0 694634 00:29:51.367 12:57:24 -- common/autotest_common.sh@941 -- # uname 00:29:51.367 12:57:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:51.367 12:57:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 694634 00:29:51.628 12:57:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:51.628 12:57:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:51.628 12:57:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 694634' 00:29:51.628 killing process with pid 694634 00:29:51.628 12:57:24 -- common/autotest_common.sh@955 -- # kill 694634 00:29:51.628 [2024-11-20 12:57:24.508297] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:51.628 12:57:24 -- common/autotest_common.sh@960 -- # wait 694634 00:29:51.628 12:57:24 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:29:51.628 12:57:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:51.628 12:57:24 -- nvmf/common.sh@116 -- # sync 00:29:51.628 12:57:24 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:51.628 12:57:24 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:51.628 12:57:24 -- nvmf/common.sh@119 -- # set +e 00:29:51.628 12:57:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:51.628 12:57:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:51.628 rmmod nvme_rdma 00:29:51.628 rmmod nvme_fabrics 
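The create and clear command lists above are replayed through spdkcli_job.py against the running nvmf target. A minimal sketch of the same kind of setup driven through SPDK's JSON-RPC client instead; the script path and option names assume a stock SPDK checkout with rpc.py on its default socket, and only one subsystem is shown:

    # assumes an nvmf_tgt process is already serving the default RPC socket
    rpc=./scripts/rpc.py

    $rpc bdev_malloc_create 32 512 -b Malloc3               # 32 MiB bdev with 512 B blocks
    $rpc nvmf_create_transport -t RDMA -u 8192              # RDMA transport, io_unit_size=8192
    $rpc nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 \
        -a -s N37SXV509SRW -m 4                             # allow any host, up to 4 namespaces
    $rpc nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4260 -f ipv4

Teardown mirrors the delete commands listed above: remove listeners and namespaces, delete the subsystems, then delete the malloc bdevs.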
00:29:51.889 12:57:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:51.889 12:57:24 -- nvmf/common.sh@123 -- # set -e 00:29:51.889 12:57:24 -- nvmf/common.sh@124 -- # return 0 00:29:51.889 12:57:24 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:29:51.889 12:57:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:51.889 12:57:24 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:51.889 00:29:51.889 real 0m23.512s 00:29:51.889 user 0m50.195s 00:29:51.889 sys 0m6.123s 00:29:51.889 12:57:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:51.889 12:57:24 -- common/autotest_common.sh@10 -- # set +x 00:29:51.889 ************************************ 00:29:51.889 END TEST spdkcli_nvmf_rdma 00:29:51.889 ************************************ 00:29:51.889 12:57:24 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:29:51.889 12:57:24 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:29:51.889 12:57:24 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:51.889 12:57:24 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:51.889 12:57:24 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:29:51.889 12:57:24 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:29:51.889 12:57:24 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:29:51.889 12:57:24 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:51.889 12:57:24 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:51.889 12:57:24 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:51.889 12:57:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:51.889 12:57:24 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:29:51.889 12:57:24 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:29:51.889 12:57:24 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:51.889 12:57:24 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:51.889 12:57:24 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:29:51.890 12:57:24 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:29:51.890 12:57:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:51.890 12:57:24 -- common/autotest_common.sh@10 -- # set +x 00:29:51.890 12:57:24 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:29:51.890 12:57:24 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:29:51.890 12:57:24 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:29:51.890 12:57:24 -- common/autotest_common.sh@10 -- # set +x 00:30:00.036 INFO: APP EXITING 00:30:00.036 INFO: killing all VMs 00:30:00.036 INFO: killing vhost app 00:30:00.036 INFO: EXIT DONE 00:30:02.585 Waiting for block devices as requested 00:30:02.585 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:02.585 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:02.585 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:02.585 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:02.585 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:02.585 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:02.585 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:02.845 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:02.845 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:30:03.106 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:03.107 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:03.107 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:03.367 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:03.367 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:03.367 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:03.367 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:03.628 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:07.839 
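The cleanup pass that follows removes per-process DPDK runtime state and shared-memory trace files left behind by the test applications. A rough sketch of that idea, with paths mirroring the entries listed below; the exact helper autotest uses is not reproduced here:

    # illustrative only: clear SPDK/DPDK runtime leftovers for the spdk0..spdk4 prefixes
    for d in /var/run/dpdk/spdk0 /var/run/dpdk/spdk1 /var/run/dpdk/spdk2 \
             /var/run/dpdk/spdk3 /var/run/dpdk/spdk4; do
        sudo rm -rf "$d"                    # config, fbarray_* and hugepage_info live here
    done
    sudo rm -f /dev/shm/*_trace*            # bdevperf/bdev_svc/nvmf/spdk_tgt trace shm files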
Cleaning 00:30:07.839 Removing: /var/run/dpdk/spdk0/config 00:30:07.839 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:07.839 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:07.839 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:07.839 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:07.839 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:07.839 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:07.839 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:07.839 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:07.839 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:07.839 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:07.839 Removing: /var/run/dpdk/spdk1/config 00:30:07.839 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:07.839 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:07.839 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:07.839 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:07.839 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:07.839 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:07.839 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:07.839 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:07.839 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:07.839 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:07.839 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:07.839 Removing: /var/run/dpdk/spdk2/config 00:30:07.839 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:07.839 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:07.839 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:07.839 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:07.839 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:07.839 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:07.839 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:07.839 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:07.839 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:07.839 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:07.839 Removing: /var/run/dpdk/spdk3/config 00:30:07.839 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:07.839 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:07.839 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:07.839 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:07.839 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:07.839 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:07.839 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:07.839 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:07.839 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:07.839 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:07.839 Removing: /var/run/dpdk/spdk4/config 00:30:07.840 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:07.840 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:07.840 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:07.840 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:07.840 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:07.840 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:07.840 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:07.840 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:07.840 Removing: /var/run/dpdk/spdk4/fbarray_memzone 
00:30:07.840 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:07.840 Removing: /dev/shm/bdevperf_trace.pid503512 00:30:07.840 Removing: /dev/shm/bdevperf_trace.pid613032 00:30:07.840 Removing: /dev/shm/bdev_svc_trace.1 00:30:07.840 Removing: /dev/shm/nvmf_trace.0 00:30:07.840 Removing: /dev/shm/spdk_tgt_trace.pid308187 00:30:07.840 Removing: /var/run/dpdk/spdk0 00:30:07.840 Removing: /var/run/dpdk/spdk1 00:30:07.840 Removing: /var/run/dpdk/spdk2 00:30:07.840 Removing: /var/run/dpdk/spdk3 00:30:07.840 Removing: /var/run/dpdk/spdk4 00:30:07.840 Removing: /var/run/dpdk/spdk_pid306679 00:30:07.840 Removing: /var/run/dpdk/spdk_pid308187 00:30:07.840 Removing: /var/run/dpdk/spdk_pid308983 00:30:07.840 Removing: /var/run/dpdk/spdk_pid314009 00:30:07.840 Removing: /var/run/dpdk/spdk_pid314693 00:30:07.840 Removing: /var/run/dpdk/spdk_pid315021 00:30:07.840 Removing: /var/run/dpdk/spdk_pid315605 00:30:07.840 Removing: /var/run/dpdk/spdk_pid316191 00:30:07.840 Removing: /var/run/dpdk/spdk_pid316591 00:30:07.840 Removing: /var/run/dpdk/spdk_pid316946 00:30:07.840 Removing: /var/run/dpdk/spdk_pid317201 00:30:07.840 Removing: /var/run/dpdk/spdk_pid317473 00:30:07.840 Removing: /var/run/dpdk/spdk_pid318771 00:30:07.840 Removing: /var/run/dpdk/spdk_pid322368 00:30:07.840 Removing: /var/run/dpdk/spdk_pid322659 00:30:07.840 Removing: /var/run/dpdk/spdk_pid322975 00:30:07.840 Removing: /var/run/dpdk/spdk_pid323131 00:30:07.840 Removing: /var/run/dpdk/spdk_pid323562 00:30:07.840 Removing: /var/run/dpdk/spdk_pid323844 00:30:07.840 Removing: /var/run/dpdk/spdk_pid324231 00:30:07.840 Removing: /var/run/dpdk/spdk_pid324561 00:30:07.840 Removing: /var/run/dpdk/spdk_pid324781 00:30:07.840 Removing: /var/run/dpdk/spdk_pid324945 00:30:07.840 Removing: /var/run/dpdk/spdk_pid325216 00:30:07.840 Removing: /var/run/dpdk/spdk_pid325322 00:30:07.840 Removing: /var/run/dpdk/spdk_pid325766 00:30:07.840 Removing: /var/run/dpdk/spdk_pid326122 00:30:07.840 Removing: /var/run/dpdk/spdk_pid326518 00:30:07.840 Removing: /var/run/dpdk/spdk_pid326834 00:30:07.840 Removing: /var/run/dpdk/spdk_pid326915 00:30:07.840 Removing: /var/run/dpdk/spdk_pid326973 00:30:07.840 Removing: /var/run/dpdk/spdk_pid327307 00:30:07.840 Removing: /var/run/dpdk/spdk_pid327650 00:30:07.840 Removing: /var/run/dpdk/spdk_pid327765 00:30:07.840 Removing: /var/run/dpdk/spdk_pid328034 00:30:07.840 Removing: /var/run/dpdk/spdk_pid328368 00:30:07.840 Removing: /var/run/dpdk/spdk_pid328725 00:30:07.840 Removing: /var/run/dpdk/spdk_pid328913 00:30:07.840 Removing: /var/run/dpdk/spdk_pid329108 00:30:07.840 Removing: /var/run/dpdk/spdk_pid329426 00:30:07.840 Removing: /var/run/dpdk/spdk_pid329781 00:30:07.840 Removing: /var/run/dpdk/spdk_pid330030 00:30:07.840 Removing: /var/run/dpdk/spdk_pid330209 00:30:07.840 Removing: /var/run/dpdk/spdk_pid330488 00:30:07.840 Removing: /var/run/dpdk/spdk_pid330843 00:30:07.840 Removing: /var/run/dpdk/spdk_pid331177 00:30:07.840 Removing: /var/run/dpdk/spdk_pid331343 00:30:07.840 Removing: /var/run/dpdk/spdk_pid331551 00:30:07.840 Removing: /var/run/dpdk/spdk_pid331905 00:30:07.840 Removing: /var/run/dpdk/spdk_pid332239 00:30:07.840 Removing: /var/run/dpdk/spdk_pid332493 00:30:07.840 Removing: /var/run/dpdk/spdk_pid332635 00:30:07.840 Removing: /var/run/dpdk/spdk_pid332962 00:30:07.840 Removing: /var/run/dpdk/spdk_pid333301 00:30:07.840 Removing: /var/run/dpdk/spdk_pid333614 00:30:07.840 Removing: /var/run/dpdk/spdk_pid333740 00:30:07.840 Removing: /var/run/dpdk/spdk_pid334023 00:30:07.840 Removing: /var/run/dpdk/spdk_pid334359 
00:30:07.840 Removing: /var/run/dpdk/spdk_pid334711 00:30:07.840 Removing: /var/run/dpdk/spdk_pid334872 00:30:07.840 Removing: /var/run/dpdk/spdk_pid335081 00:30:07.840 Removing: /var/run/dpdk/spdk_pid335415 00:30:07.840 Removing: /var/run/dpdk/spdk_pid335772 00:30:07.840 Removing: /var/run/dpdk/spdk_pid335975 00:30:07.840 Removing: /var/run/dpdk/spdk_pid336172 00:30:07.840 Removing: /var/run/dpdk/spdk_pid336486 00:30:07.840 Removing: /var/run/dpdk/spdk_pid336846 00:30:07.840 Removing: /var/run/dpdk/spdk_pid337159 00:30:07.840 Removing: /var/run/dpdk/spdk_pid337336 00:30:07.840 Removing: /var/run/dpdk/spdk_pid337557 00:30:07.840 Removing: /var/run/dpdk/spdk_pid337907 00:30:07.840 Removing: /var/run/dpdk/spdk_pid338033 00:30:07.840 Removing: /var/run/dpdk/spdk_pid338409 00:30:07.840 Removing: /var/run/dpdk/spdk_pid343069 00:30:07.840 Removing: /var/run/dpdk/spdk_pid464244 00:30:07.840 Removing: /var/run/dpdk/spdk_pid469119 00:30:07.840 Removing: /var/run/dpdk/spdk_pid481000 00:30:07.840 Removing: /var/run/dpdk/spdk_pid487124 00:30:07.840 Removing: /var/run/dpdk/spdk_pid491227 00:30:07.840 Removing: /var/run/dpdk/spdk_pid492209 00:30:07.840 Removing: /var/run/dpdk/spdk_pid503512 00:30:07.840 Removing: /var/run/dpdk/spdk_pid503967 00:30:07.840 Removing: /var/run/dpdk/spdk_pid508566 00:30:07.840 Removing: /var/run/dpdk/spdk_pid515265 00:30:07.840 Removing: /var/run/dpdk/spdk_pid518375 00:30:07.840 Removing: /var/run/dpdk/spdk_pid529929 00:30:07.840 Removing: /var/run/dpdk/spdk_pid559193 00:30:07.840 Removing: /var/run/dpdk/spdk_pid563406 00:30:07.840 Removing: /var/run/dpdk/spdk_pid569314 00:30:07.840 Removing: /var/run/dpdk/spdk_pid610686 00:30:07.840 Removing: /var/run/dpdk/spdk_pid611808 00:30:07.840 Removing: /var/run/dpdk/spdk_pid613032 00:30:07.840 Removing: /var/run/dpdk/spdk_pid617796 00:30:07.840 Removing: /var/run/dpdk/spdk_pid626033 00:30:07.840 Removing: /var/run/dpdk/spdk_pid627058 00:30:07.840 Removing: /var/run/dpdk/spdk_pid628073 00:30:07.840 Removing: /var/run/dpdk/spdk_pid629091 00:30:07.840 Removing: /var/run/dpdk/spdk_pid629567 00:30:07.840 Removing: /var/run/dpdk/spdk_pid634530 00:30:07.840 Removing: /var/run/dpdk/spdk_pid634603 00:30:07.840 Removing: /var/run/dpdk/spdk_pid639281 00:30:07.840 Removing: /var/run/dpdk/spdk_pid639961 00:30:07.840 Removing: /var/run/dpdk/spdk_pid640634 00:30:07.840 Removing: /var/run/dpdk/spdk_pid641638 00:30:07.840 Removing: /var/run/dpdk/spdk_pid641652 00:30:07.840 Removing: /var/run/dpdk/spdk_pid643362 00:30:07.840 Removing: /var/run/dpdk/spdk_pid645574 00:30:07.840 Removing: /var/run/dpdk/spdk_pid647749 00:30:08.102 Removing: /var/run/dpdk/spdk_pid650091 00:30:08.102 Removing: /var/run/dpdk/spdk_pid652699 00:30:08.102 Removing: /var/run/dpdk/spdk_pid654977 00:30:08.102 Removing: /var/run/dpdk/spdk_pid662183 00:30:08.102 Removing: /var/run/dpdk/spdk_pid662947 00:30:08.102 Removing: /var/run/dpdk/spdk_pid664067 00:30:08.102 Removing: /var/run/dpdk/spdk_pid665411 00:30:08.102 Removing: /var/run/dpdk/spdk_pid671326 00:30:08.102 Removing: /var/run/dpdk/spdk_pid674592 00:30:08.102 Removing: /var/run/dpdk/spdk_pid680939 00:30:08.102 Removing: /var/run/dpdk/spdk_pid681274 00:30:08.102 Removing: /var/run/dpdk/spdk_pid688045 00:30:08.102 Removing: /var/run/dpdk/spdk_pid688432 00:30:08.102 Removing: /var/run/dpdk/spdk_pid690884 00:30:08.102 Removing: /var/run/dpdk/spdk_pid694634 00:30:08.102 Clean 00:30:08.102 killing process with pid 248484 00:30:18.107 killing process with pid 248481 00:30:18.107 killing process with pid 248483 00:30:18.107 killing 
process with pid 248482 00:30:18.107 12:57:49 -- common/autotest_common.sh@1446 -- # return 0 00:30:18.107 12:57:49 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:30:18.107 12:57:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:18.107 12:57:49 -- common/autotest_common.sh@10 -- # set +x 00:30:18.107 12:57:49 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:30:18.107 12:57:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:18.107 12:57:49 -- common/autotest_common.sh@10 -- # set +x 00:30:18.107 12:57:49 -- spdk/autotest.sh@377 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:30:18.107 12:57:49 -- spdk/autotest.sh@379 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:30:18.107 12:57:49 -- spdk/autotest.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:30:18.107 12:57:49 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:30:18.107 12:57:49 -- spdk/autotest.sh@383 -- # hostname 00:30:18.107 12:57:49 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-cyp-13 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:30:18.107 geninfo: WARNING: invalid characters removed from testname! 00:30:40.068 12:58:11 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:41.008 12:58:14 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:42.918 12:58:15 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:43.860 12:58:16 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:45.770 12:58:18 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q 
-r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:46.708 12:58:19 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:48.617 12:58:21 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:48.617 12:58:21 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:30:48.617 12:58:21 -- common/autotest_common.sh@1690 -- $ lcov --version 00:30:48.617 12:58:21 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:30:48.617 12:58:21 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:30:48.617 12:58:21 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:30:48.617 12:58:21 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:30:48.617 12:58:21 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:30:48.617 12:58:21 -- scripts/common.sh@335 -- $ IFS=.-: 00:30:48.617 12:58:21 -- scripts/common.sh@335 -- $ read -ra ver1 00:30:48.617 12:58:21 -- scripts/common.sh@336 -- $ IFS=.-: 00:30:48.617 12:58:21 -- scripts/common.sh@336 -- $ read -ra ver2 00:30:48.617 12:58:21 -- scripts/common.sh@337 -- $ local 'op=<' 00:30:48.617 12:58:21 -- scripts/common.sh@339 -- $ ver1_l=2 00:30:48.617 12:58:21 -- scripts/common.sh@340 -- $ ver2_l=1 00:30:48.617 12:58:21 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:30:48.617 12:58:21 -- scripts/common.sh@343 -- $ case "$op" in 00:30:48.617 12:58:21 -- scripts/common.sh@344 -- $ : 1 00:30:48.617 12:58:21 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:30:48.617 12:58:21 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:48.617 12:58:21 -- scripts/common.sh@364 -- $ decimal 1 00:30:48.617 12:58:21 -- scripts/common.sh@352 -- $ local d=1 00:30:48.617 12:58:21 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:30:48.617 12:58:21 -- scripts/common.sh@354 -- $ echo 1 00:30:48.617 12:58:21 -- scripts/common.sh@364 -- $ ver1[v]=1 00:30:48.617 12:58:21 -- scripts/common.sh@365 -- $ decimal 2 00:30:48.617 12:58:21 -- scripts/common.sh@352 -- $ local d=2 00:30:48.617 12:58:21 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:30:48.617 12:58:21 -- scripts/common.sh@354 -- $ echo 2 00:30:48.617 12:58:21 -- scripts/common.sh@365 -- $ ver2[v]=2 00:30:48.617 12:58:21 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:30:48.617 12:58:21 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:30:48.617 12:58:21 -- scripts/common.sh@367 -- $ return 0 00:30:48.617 12:58:21 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:48.617 12:58:21 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:30:48.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.617 --rc genhtml_branch_coverage=1 00:30:48.617 --rc genhtml_function_coverage=1 00:30:48.617 --rc genhtml_legend=1 00:30:48.617 --rc geninfo_all_blocks=1 00:30:48.617 --rc geninfo_unexecuted_blocks=1 00:30:48.617 00:30:48.617 ' 00:30:48.617 12:58:21 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:30:48.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.617 --rc genhtml_branch_coverage=1 00:30:48.617 --rc genhtml_function_coverage=1 00:30:48.617 --rc genhtml_legend=1 00:30:48.617 --rc geninfo_all_blocks=1 00:30:48.617 --rc geninfo_unexecuted_blocks=1 00:30:48.617 00:30:48.617 ' 00:30:48.617 12:58:21 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:30:48.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.617 --rc genhtml_branch_coverage=1 00:30:48.617 --rc genhtml_function_coverage=1 00:30:48.617 --rc genhtml_legend=1 00:30:48.617 --rc geninfo_all_blocks=1 00:30:48.617 --rc geninfo_unexecuted_blocks=1 00:30:48.617 00:30:48.617 ' 00:30:48.617 12:58:21 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:30:48.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.617 --rc genhtml_branch_coverage=1 00:30:48.617 --rc genhtml_function_coverage=1 00:30:48.617 --rc genhtml_legend=1 00:30:48.617 --rc geninfo_all_blocks=1 00:30:48.617 --rc geninfo_unexecuted_blocks=1 00:30:48.617 00:30:48.617 ' 00:30:48.617 12:58:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:48.617 12:58:21 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:48.617 12:58:21 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.617 12:58:21 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.617 12:58:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.617 12:58:21 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.617 12:58:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.617 12:58:21 -- paths/export.sh@5 -- $ export PATH 00:30:48.617 12:58:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.617 12:58:21 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:30:48.617 12:58:21 -- common/autobuild_common.sh@440 -- $ date +%s 00:30:48.617 12:58:21 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732103901.XXXXXX 00:30:48.617 12:58:21 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732103901.W7flCC 00:30:48.617 12:58:21 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:30:48.617 12:58:21 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:30:48.617 12:58:21 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:30:48.617 12:58:21 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:48.617 12:58:21 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:48.617 12:58:21 -- common/autobuild_common.sh@456 -- $ get_config_params 00:30:48.617 12:58:21 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:30:48.617 12:58:21 -- common/autotest_common.sh@10 -- $ set +x 00:30:48.617 12:58:21 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:30:48.617 12:58:21 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:30:48.617 12:58:21 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:48.617 12:58:21 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:48.617 12:58:21 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:48.617 12:58:21 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:48.617 12:58:21 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:48.617 12:58:21 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:48.618 12:58:21 -- common/autotest_common.sh@735 -- $ '[' -x 
/usr/local/FlameGraph/flamegraph.pl ']' 00:30:48.618 12:58:21 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:30:48.618 12:58:21 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:48.618 + [[ -n 206066 ]] 00:30:48.618 + sudo kill 206066 00:30:48.629 [Pipeline] } 00:30:48.643 [Pipeline] // stage 00:30:48.648 [Pipeline] } 00:30:48.662 [Pipeline] // timeout 00:30:48.668 [Pipeline] } 00:30:48.681 [Pipeline] // catchError 00:30:48.690 [Pipeline] } 00:30:48.706 [Pipeline] // wrap 00:30:48.712 [Pipeline] } 00:30:48.725 [Pipeline] // catchError 00:30:48.734 [Pipeline] stage 00:30:48.736 [Pipeline] { (Epilogue) 00:30:48.749 [Pipeline] catchError 00:30:48.751 [Pipeline] { 00:30:48.764 [Pipeline] echo 00:30:48.766 Cleanup processes 00:30:48.772 [Pipeline] sh 00:30:49.066 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:49.066 715006 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:49.081 [Pipeline] sh 00:30:49.372 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:49.372 ++ grep -v 'sudo pgrep' 00:30:49.372 ++ awk '{print $1}' 00:30:49.372 + sudo kill -9 00:30:49.372 + true 00:30:49.386 [Pipeline] sh 00:30:49.677 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:01.921 [Pipeline] sh 00:31:02.211 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:02.211 Artifacts sizes are good 00:31:02.226 [Pipeline] archiveArtifacts 00:31:02.233 Archiving artifacts 00:31:02.456 [Pipeline] sh 00:31:02.764 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:31:02.795 [Pipeline] cleanWs 00:31:02.808 [WS-CLEANUP] Deleting project workspace... 00:31:02.808 [WS-CLEANUP] Deferred wipeout is used... 00:31:02.827 [WS-CLEANUP] done 00:31:02.829 [Pipeline] } 00:31:02.845 [Pipeline] // catchError 00:31:02.855 [Pipeline] sh 00:31:03.188 + logger -p user.info -t JENKINS-CI 00:31:03.199 [Pipeline] } 00:31:03.213 [Pipeline] // stage 00:31:03.218 [Pipeline] } 00:31:03.232 [Pipeline] // node 00:31:03.237 [Pipeline] End of Pipeline 00:31:03.272 Finished: SUCCESS
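The coverage stage earlier in the log merges the base and test captures and then strips third-party and uninteresting paths from the report. A condensed sketch of that sequence with the same lcov options; the output directory variable below is shortened for readability:

    out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

    # combine the pre-test baseline with the post-test capture
    lcov $LCOV_OPTS -q -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info
    # drop bundled DPDK, system headers and example code from the merged report
    lcov $LCOV_OPTS -q -r $out/cov_total.info '*/dpdk/*'           -o $out/cov_total.info
    lcov $LCOV_OPTS -q -r $out/cov_total.info '/usr/*'             -o $out/cov_total.info
    lcov $LCOV_OPTS -q -r $out/cov_total.info '*/examples/vmd/*'   -o $out/cov_total.info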